External service call with Java 11 HttpClient sync vs async - java

My microservice makes a POST call to an external service, and I want to use the Java 11 HttpClient. How do the send() and sendAsync() methods differ in practice? I have tested with varying numbers of requests and the latency is almost the same: I tried executing 100 calls to my endpoint with 10, 20, or more threads, and the results for both methods are nearly identical.
I use sendAsync() followed by thenApply(...).get() to receive the response.
I would like to know which approach is preferred and why. Is async also supposed to be faster (which is not what my current results show)?
Thanks in advance for your answers!

Here's a test of both methods:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpRequest.Builder;
import java.net.http.HttpResponse;
import java.net.http.HttpResponse.BodyHandlers;
import java.time.Duration;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.function.Function;
import java.util.function.Supplier;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
public class HttpClientTest {
static final int REQUEST_COUNT = 100;
static final String URI_TEMPLATE = "https://jsonplaceholder.typicode.com/posts/%d";
public static void main(final String[] args) throws Exception {
final List<HttpRequest> requests = IntStream.rangeClosed(1, REQUEST_COUNT)
.mapToObj(i -> String.format(URI_TEMPLATE, i))
.map(URI::create)
.map(HttpRequest::newBuilder)
.map(Builder::build)
.collect(Collectors.toList());
final HttpClient client = HttpClient.newBuilder()
.executor(Executors.newFixedThreadPool(REQUEST_COUNT))
.build();
final ThrowingFunction<HttpRequest, String> sendSync =
request -> client.send(request, BodyHandlers.ofString()).body();
final ThrowingFunction<CompletableFuture<HttpResponse<String>>, String> getSync =
future -> future.get().body();
benchmark("sync", () -> requests.stream()
.map(sendSync)
.collect(Collectors.toList()));
benchmark("async", () -> requests.stream()
.map(request -> client.sendAsync(request, BodyHandlers.ofString()))
.collect(Collectors.toList()) // materialize to send the requests
.stream()
.map(getSync)
.collect(Collectors.toList()));
}
static void benchmark(final String name, final Supplier<List<String>> supplier) {
new Thread(() -> {
final long start = System.nanoTime();
System.out.printf("%s: start%n", name);
final List<String> result = supplier.get();
final Duration duration = Duration.ofNanos(System.nanoTime() - start);
final int size = result.stream()
.mapToInt(String::length)
.sum();
System.out.printf("%s: end, got %d chars, took %s%n", name, size, duration);
}, name).start();
}
@FunctionalInterface
interface ThrowingFunction<T, R> extends Function<T, R> {
default R apply(final T t) {
try {
return applyThrowing(t);
} catch (final Exception e) {
throw new RuntimeException(e);
}
}
R applyThrowing(T t) throws Exception;
}
}
Example output:
sync: start
async: start
async: end, got 26118 chars, took PT1.6102532S
sync: end, got 26118 chars, took PT4.3368509S
The higher the degree of parallelism (the more requests in flight at once), the bigger the advantage of the asynchronous method.
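For the asynchronous variant it also matters how the futures are awaited: calling get() on each future one by one still sends all requests up front (they were materialized into a list first), but a more idiomatic way to fan out and collect the results is CompletableFuture.allOf. A minimal sketch, reusing the client and requests from the test above:
List<CompletableFuture<HttpResponse<String>>> futures = requests.stream()
    .map(request -> client.sendAsync(request, BodyHandlers.ofString()))
    .collect(Collectors.toList());
// Wait until every request has completed (successfully or not).
CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
// All futures are already completed here, so join() does not block per element.
List<String> bodies = futures.stream()
    .map(CompletableFuture::join)
    .map(HttpResponse::body)
    .collect(Collectors.toList());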

Related

kafka streams abandoned cart development - session window

I am attempting to build a Kafka Streams app that takes in records from an input topic containing a simple JSON payload (id and timestamp included; the key is a simple 3-digit string, and no schema is required). For the output topic I want to produce only the records that have been abandoned for 30 minutes or more (session window). Based on this link, I have begun to develop the following Kafka Streams app:
package io.confluent.developer;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.SessionWindows;
import java.io.FileInputStream;
import java.io.IOException;
import java.time.Duration;
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;
import java.time.format.FormatStyle;
import java.time.temporal.ChronoUnit;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.CountDownLatch;
public class SessionWindow {
private final DateTimeFormatter timeFormatter = DateTimeFormatter.ofLocalizedTime(FormatStyle.LONG)
.withLocale(Locale.US)
.withZone(ZoneId.systemDefault());
public Topology buildTopology(Properties allProps) {
final StreamsBuilder builder = new StreamsBuilder();
final String inputTopic = allProps.getProperty("input.topic.name");
final String outputTopic = allProps.getProperty("output.topic.name");
builder.stream(inputTopic, Consumed.with(Serdes.String(), Serdes.String()))
.groupByKey()
.windowedBy(SessionWindows.ofInactivityGapAndGrace(Duration.ofMinutes(5), Duration.ofSeconds(10)))
.count()
.toStream()
.map((windowedKey, count) -> {
String start = timeFormatter.format(windowedKey.window().startTime());
String end = timeFormatter.format(windowedKey.window().endTime());
String sessionInfo = String.format("Session info started: %s ended: %s with count %s", start, end, count);
return KeyValue.pair(windowedKey.key(), sessionInfo);
})
.to(outputTopic, Produced.with(Serdes.String(), Serdes.String()));
return builder.build();
}
public Properties loadEnvProperties(String fileName) throws IOException {
Properties allProps = new Properties();
FileInputStream input = new FileInputStream(fileName);
allProps.load(input);
input.close();
return allProps;
}
public static void main(String[] args) throws Exception {
if (args.length < 1) {
throw new IllegalArgumentException("This program takes one argument: the path to an environment configuration file.");
}
SessionWindow tw = new SessionWindow();
Properties allProps = tw.loadEnvProperties(args[0]);
allProps.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
allProps.put(StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG, ClickEventTimestampExtractor.class);
Topology topology = tw.buildTopology(allProps);
ClicksDataGenerator dataGenerator = new ClicksDataGenerator(allProps);
dataGenerator.generate();
final KafkaStreams streams = new KafkaStreams(topology, allProps);
final CountDownLatch latch = new CountDownLatch(1);
// Attach shutdown handler to catch Control-C.
Runtime.getRuntime().addShutdownHook(new Thread("streams-shutdown-hook") {
@Override
public void run() {
streams.close(Duration.ofSeconds(5));
latch.countDown();
}
});
try {
streams.cleanUp();
streams.start();
latch.await();
} catch (Throwable e) {
System.exit(1);
}
System.exit(0);
}
static class ClicksDataGenerator {
final Properties properties;
public ClicksDataGenerator(final Properties properties) {
this.properties = properties;
}
public void generate() {
properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
}
}
}
package io.confluent.developer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;
public class ClickEventTimestampExtractor implements TimestampExtractor {
@Override
public long extract(ConsumerRecord<Object, Object> record, long previousTimestamp) {
System.out.println(record.value());
return record.getTimestamp();
}
}
I am having issues with the following:
Getting the code to compile - I keep getting this error (I am new to Java, so please bear with me). What is the correct way to get the timestamp?
error: cannot find symbol
return record.getTimestamp();
^
symbol: method getTimestamp()
location: variable record of type ConsumerRecord<Object,Object>
1 error
I am also not sure whether the timestamp extractor will work for this particular scenario. I read here that 'the timestamp extractor can only give you one timestamp'. Does that mean that if there are multiple messages with different keys this won't work? Some clarification or examples would help.
thanks!
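For what it's worth, the compile error comes from the accessor name: ConsumerRecord exposes the record timestamp via timestamp(), not getTimestamp(). A minimal sketch of the extract method (the rest of the class stays as in the question); if the timestamp embedded in the JSON payload is needed instead, record.value() would have to be parsed here:
@Override
public long extract(ConsumerRecord<Object, Object> record, long previousTimestamp) {
    // timestamp() returns the record's timestamp in epoch milliseconds
    return record.timestamp();
}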

What is the purpose of CompletableFuture's complete method?

I've been doing some reading about CompletableFuture.
So far I understand that CompletableFuture differs from Future in that it provides the means to chain futures together and to use callbacks to handle a future's result without actually blocking the code.
However, there is this complete() method that I'm having a hard time wrapping my head around. I only know that it allows us to complete a future manually, but what is it used for? The most common examples I found for this method are cases where, while doing some async task, we immediately return a string, for example. But what is the point of doing so if the return value doesn't reflect the actual result? If we want to do something asynchronously, why don't we just use a regular Future instead? The only use I can think of is conditionally cancelling an ongoing future. But I think I'm missing some important key points here.
complete() completes the future manually with the value you pass in; here it completes the future with the response returned by getResponse("a1=Chittagong&a2=city").
You can run that method in a different thread.
When the response from getResponse() becomes available, thenApply() is invoked to print the log.
Nobody is blocked if you run getResponse(String url) in a different thread.
This example shows a scenario where we print a log while the response is delivered through complete():
Code
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.logging.Level;
import java.util.logging.Logger;
public class CompletableFutureEx {
Logger logger = Logger.getLogger(CompletableFutureEx.class.getName());
public static void main(String[] args) {
new CompletableFutureEx().completableFutureEx();
}
private void completableFutureEx() {
var completableFuture = new CompletableFuture<String>();
completableFuture.thenApply(response -> {
logger.log(Level.INFO, "Response : " + response);
return response;
});
//some long process response
try {
completableFuture.complete(getResponse("a1=Chittagong&a2=city"));
} catch (Exception e) {
completableFuture.completeExceptionally(e);
}
try {
System.out.println(completableFuture.get());
} catch (InterruptedException | ExecutionException e) {
e.printStackTrace();
}
}
private String getResponse(String url) throws URISyntaxException, IOException, InterruptedException {
var finalUrl = "http://localhost:8081/api/v1/product/add?" + url;
//http://localhost:8081/api/v1/product/add?a1=Chittagong&a2=city
var request = HttpRequest.newBuilder()
.uri(new URI(finalUrl)).GET().build();
var response = HttpClient.newHttpClient()
.send(request, HttpResponse.BodyHandlers.ofString());
System.out.println("response body " + response.body());
return response.body();
}
}
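A common real-world use of complete() is bridging a callback-based API into a CompletableFuture: you hand the future out immediately and complete it later from the callback. A minimal sketch; the Callback interface and legacyFetch method are hypothetical stand-ins for whatever asynchronous API is being wrapped:
import java.util.concurrent.CompletableFuture;
public class CallbackBridge {
    // Hypothetical callback-style API we want to adapt.
    interface Callback {
        void onSuccess(String body);
        void onFailure(Throwable error);
    }
    static void legacyFetch(String url, Callback callback) {
        // Pretend this dispatches work somewhere else and calls back later.
        new Thread(() -> callback.onSuccess("response for " + url)).start();
    }
    static CompletableFuture<String> fetch(String url) {
        CompletableFuture<String> future = new CompletableFuture<>();
        legacyFetch(url, new Callback() {
            @Override public void onSuccess(String body) { future.complete(body); }
            @Override public void onFailure(Throwable error) { future.completeExceptionally(error); }
        });
        return future; // callers can chain thenApply/thenCompose without blocking
    }
    public static void main(String[] args) {
        fetch("http://example.com")
            .thenApply(String::toUpperCase)
            .thenAccept(System.out::println)
            .join();
    }
}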

How to stop reactor websocket connection

Given this example from reactor docs:
import io.netty.buffer.Unpooled;
import io.netty.util.CharsetUtil;
import reactor.core.publisher.Flux;
import reactor.netty.http.client.HttpClient;
public class Application {
public static void main(String[] args) {
HttpClient client = HttpClient.create();
client.websocket()
.uri("wss://echo.websocket.org")
.handle((inbound, outbound) -> {
inbound.receive()
.asString()
.take(1)
.subscribe(System.out::println);
final byte[] msgBytes = "hello".getBytes(CharsetUtil.ISO_8859_1);
return outbound.send(Flux.just(Unpooled.wrappedBuffer(msgBytes), Unpooled.wrappedBuffer(msgBytes)))
.neverComplete();
})
.blockLast();
}
}
How do I stop and disconnect from the websocket completely when take(1) or some other condition is met? Right now it hangs indefinitely and does not exit.
It seems this sample can be fixed by removing neverComplete(): the connection is closed (and blockLast() returns) once the Publisher returned from handle() completes, so folding the receive side into that Publisher makes it terminate after take(1):
import io.netty.util.CharsetUtil;
import reactor.core.publisher.Mono;
import reactor.netty.http.client.HttpClient;
public class Application {
public static void main(String[] args) {
HttpClient client = HttpClient.create();
client.websocket()
.uri("wss://echo.websocket.org")
.handle((inbound, outbound) -> {
return outbound.sendString(Mono.just("hello"), CharsetUtil.UTF_8)
.then()
.thenMany(inbound.receive()
.asString()
.take(1)
.doOnNext(System.out::println));
})
.blockLast();
}
}

rxjava2 for load testing inconsistencies as result of onComplete event

There is code that is supposed to load-test some function that performs an HTTP call (we call it callInit here) and to collect some data in LoaTestMetricsData:
the collected responses
and the total duration of the execution.
import io.reactivex.Observable;
import io.reactivex.Scheduler;
import io.reactivex.Single;
import io.reactivex.observers.TestObserver;
import io.reactivex.schedulers.Schedulers;
import io.reactivex.subjects.PublishSubject;
import io.reactivex.subjects.Subject;
import io.restassured.internal.RestAssuredResponseImpl;
import io.restassured.response.Response;
import org.junit.jupiter.api.Test;
import java.time.Duration;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;
import static java.lang.Thread.sleep;
import static org.hamcrest.CoreMatchers.equalTo;
import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.allOf;
import static org.hamcrest.Matchers.greaterThanOrEqualTo;
import static org.hamcrest.Matchers.lessThan;
public class TestRx {
@Test
public void loadTest() {
int CALL_N_TIMES = 10;
final long CALL_NIT_EVERY_MILLISECONDS = 100;
final LoaTestMetricsData loaTestMetricsData = loadTestHttpCall(
this::callInit,
CALL_N_TIMES,
CALL_NIT_EVERY_MILLISECONDS
);
assertThat(loaTestMetricsData.responseList.size(), is(equalTo(Long.valueOf(CALL_N_TIMES).intValue())));
long errorCount = loaTestMetricsData.responseList.stream().filter(x -> x.getStatusCode() != 200).count();
long executionTime = loaTestMetricsData.duration.getSeconds();
//assertThat(errorCount, is(equalTo(0)));
assertThat(executionTime , allOf(greaterThanOrEqualTo(1L),lessThan(3L)));
}
// --
private Single<Response> callInit() {
try {
return Single.fromCallable(() -> {
System.out.println("...");
sleep(1000);
Response response = new RestAssuredResponseImpl();
return response;
});
} catch (Exception ex) {
throw new RuntimeException(ex.getMessage());
}
}
// --
private LoaTestMetricsData loadTestHttpCall(final Supplier<Single<Response>> restCallFunction, long callnTimes, long callEveryMilisseconds) {
long startTimeMillis = System.currentTimeMillis();
final LoaTestMetricsData loaDestMetricsData = new LoaTestMetricsData();
final AtomicInteger atomicInteger = new AtomicInteger(0);
final TestObserver<Response> testObserver = new TestObserver<Response>() {
public void onNext(Response response) {
loaDestMetricsData.responseList.add(response);
super.onNext(response);
}
public void onComplete() {
loaDestMetricsData.duration = Duration.ofMillis(System.currentTimeMillis() - startTimeMillis);
super.onComplete();
}
};
final Subject<Response> subjectInitCallResults = PublishSubject.create(); // Memo: Subjects are hot so if you don't observe them the right time, you may not get events. Thus: subscribe first then emit (onNext)
final Scheduler schedulerIo = Schedulers.io();
subjectInitCallResults
.subscribeOn(schedulerIo)
.subscribe(testObserver); // subscribe first
final Observable<Long> source = Observable.interval(callEveryMilisseconds, TimeUnit.MILLISECONDS).take(callnTimes);
source.subscribe(x -> {
final Single<Response> singleResult = restCallFunction.get();
singleResult
.subscribeOn(schedulerIo)
.subscribe( result -> {
int count = atomicInteger.incrementAndGet();
if(count == callnTimes) {
subjectInitCallResults.onNext(result); // then emit
subjectInitCallResults.onComplete();
} else {
subjectInitCallResults.onNext(result);
}
});
});
testObserver.awaitTerminalEvent();
testObserver.assertComplete();
testObserver.assertValueCount(Long.valueOf(callnTimes).intValue()); // !!!
return loaDestMetricsData;
}
}
The LoaTestMetricsData class is defined as:
import io.restassured.response.Response;
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
public class LoaTestMetricsData {
public List<Response> responseList = new ArrayList<>();
public Duration duration;
}
Sometimes the test fails with this error:
java.lang.AssertionError: Value counts differ; expected: 10 but was: 9 (latch = 0, values = 9, errors = 0, completions = 1)
Expected :10
Actual :9 (latch = 0, values = 9, errors = 0, completions = 1)
Could someone tell me why?
It is as if some of the subjectInitCallResults.onNext() calls were not executed, or not consumed. But why? I understand that PublishSubject is a hot observable, which is why I subscribe to it before emitting anything with onNext.
UPDATE:
What does fix it is this ugly code, which waits for the subject to fill up:
while(subjectInitCallResults.count().blockingGet() != callnTimes) {
Thread.sleep(100);
}
..
testObserver.awaitTerminalEvent();
But is there a proper / better way of doing it?
Thanks.
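A likely explanation (a guess, not verified): the Single results arrive on different Schedulers.io() threads, so the thread that increments the counter to callnTimes can emit onComplete while another thread has not yet delivered its onNext, and an onNext arriving after onComplete is dropped, which would account for the occasional count of 9. Wrapping the subject with toSerialized() makes the concurrent emissions legal but does not remove that ordering race. One way to avoid both the race and the manual counter is to let an inner-merging operator complete the stream; a minimal sketch, reusing restCallFunction, schedulerIo, callnTimes, callEveryMilisseconds and testObserver from the question:
Observable.interval(callEveryMilisseconds, TimeUnit.MILLISECONDS)
    .take(callnTimes)
    // flatMapSingle completes only after the interval source and all inner Singles complete,
    // so the value count always matches the number of ticks taken.
    .flatMapSingle(tick -> restCallFunction.get().subscribeOn(schedulerIo))
    .subscribe(testObserver);
testObserver.awaitTerminalEvent();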

Integrating streaming events from eventstore with akka actors using akka streams in Java

I am working with GetEventStore as the journal provider for the events persisted by akka-persistence, and I am using akka.persistence.query.javadsl to query the events from the event store. The actor system and the journal provider are configured using Spring.
The eventstore configuration is the following:
eventstore {
# IP & port of Event Store
address {
host = "xxxx"
port = 1113
}
http {
protocol = "http"
port = 2113
prefix = ""
}
# The desired connection timeout
connection-timeout = 10s
# Maximum number of reconnections before backing, -1 to reconnect forever
max-reconnections = 100
reconnection-delay {
# Delay before first reconnection
min = 250ms
# Maximum delay on reconnections
max = 1s
}
# The default credentials to use for operations where others are not explicitly supplied.
credentials {
login = "admin"
password = "changeit"
}
heartbeat {
# The interval at which to send heartbeat messages.
interval = 500ms
# The interval after which an unacknowledged heartbeat will cause the connection to be considered faulted and disconnect.
timeout = 5s
}
operation {
# The maximum number of operation retries
max-retries = 10
# The amount of time before an operation is considered to have timed out
timeout = 500s
}
# Whether to resolve LinkTo events automatically
resolve-linkTos = false
# Whether or not to require EventStore to refuse serving read or write request if it is not master
require-master = true
# Number of events to be retrieved by client as single message
read-batch-size = 990
# The size of the buffer in element count
buffer-size = 100000
# Strategy that is used when elements cannot fit inside the buffer
# Possible values DropHead, DropTail, DropBuffer, DropNew, Fail
buffer-overflow-strategy = "DropHead"
# The number of serialization/deserialization functions to be run in parallel
serialization-parallelism = 8
# Serialization done asynchronously and these futures may complete in any order,
# but results will be used with preserved order if set to true
serialization-ordered = true
cluster {
# Endpoints for seeding gossip
# For example: ["127.0.0.1:1", "127.0.0.2:2"]
gossip-seeds = []
# The DNS name to use for discovering endpoints
dns = null
# The time given to resolve dns
dns-lookup-timeout = 2s
# The well-known endpoint on which cluster managers are running
external-gossip-port = 30778
# Maximum number of attempts for discovering endpoints
max-discover-attempts = 10
# The interval between cluster discovery attempts
discover-attempt-interval = 500ms
# The interval at which to keep discovering cluster
discovery-interval = 1s
# Timeout for cluster gossip
gossip-timeout = 1s
}
persistent-subscription {
# Whether to resolve LinkTo events automatically
resolve-linkTos = false
# Where the subscription should start from (position)
start-from = last
# Whether or not in depth latency statistics should be tracked on this subscription.
extra-statistics = false
# The amount of time after which a message should be considered to be timedout and retried.
message-timeout = 30s
# The maximum number of retries (due to timeout) before a message get considered to be parked
max-retry-count = 500
# The size of the buffer listening to live messages as they happen
live-buffer-size = 500
# The number of events read at a time when paging in history
read-batch-size = 100
# The number of events to cache when paging through history
history-buffer-size = 20
# The amount of time to try to checkpoint after
checkpoint-after = 2s
# The minimum number of messages to checkpoint
min-checkpoint-count = 10
# The maximum number of messages to checkpoint if this number is a reached a checkpoint will be forced.
max-checkpoint-count = 1000
# The maximum number of subscribers allowed
max-subscriber-count = 0
# The [[ConsumerStrategy]] to use for distributing events to client consumers
# Known are RoundRobin, DispatchToSingle
# however you can provide a custom one, just make sure it is supported by server
consumer-strategy = RoundRobin
}
}
The journal provider code is the following:
package com.org.utils;
import static akka.stream.ActorMaterializer.create;
import static java.util.concurrent.CompletableFuture.allOf;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ExecutionException;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import akka.Done;
import akka.NotUsed;
import akka.actor.ActorSystem;
import akka.japi.function.Predicate;
import akka.japi.function.Procedure;
import akka.persistence.query.EventEnvelope;
import akka.persistence.query.PersistenceQuery;
import akka.persistence.query.javadsl.AllPersistenceIdsQuery;
import akka.persistence.query.javadsl.CurrentEventsByPersistenceIdQuery;
import akka.persistence.query.javadsl.CurrentPersistenceIdsQuery;
import akka.persistence.query.javadsl.EventsByPersistenceIdQuery;
import akka.persistence.query.javadsl.ReadJournal;
import akka.stream.ActorMaterializer;
import akka.stream.Materializer;
import akka.stream.javadsl.Source;
import lombok.extern.log4j.Log4j;
@Service
@Log4j
public class JournalProvider {
private ActorSystem system;
private ReadJournal readJournal;
@Autowired
public JournalProvider(ActorSystem system) {
super();
this.system = system;
}
@SuppressWarnings({ "rawtypes", "unchecked" })
public ReadJournal journal(ActorSystem system) {
if (readJournal == null) {
String queryJournalClass = system.settings().config().getString("queryJournalClass");
String queryIdentifier = system.settings().config().getString("queryIdentifier");
if (queryJournalClass == null || queryIdentifier == null) {
throw new RuntimeException(
"Please set queryIdentifier and queryJournalClass variables in application.conf or reference.conf");
}
try {
Class clasz = Class.forName(queryJournalClass);
readJournal = PersistenceQuery.get(system).getReadJournalFor(clasz, queryIdentifier);
} catch (ClassNotFoundException e) {
throw new RuntimeException("Caught exception : " + e);
}
}
return readJournal;
}
public CompletableFuture<Void> runForEachId(Procedure<EventEnvelope> function,
Map<String, Long> idsWithStartSequenceNr) {
List<CompletableFuture<Done>> allFutures = new ArrayList<>();
for (String id : idsWithStartSequenceNr.keySet()) {
Long fromSequenceNr = idsWithStartSequenceNr.get(id);
CompletionStage<Done> mapPreparedCompletionStage = runForEachEvent(id, fromSequenceNr, function);
allFutures.add(mapPreparedCompletionStage.toCompletableFuture());
}
CompletableFuture<Void> combinedFuture = allOf(allFutures.toArray(new CompletableFuture[0]));
return combinedFuture;
}
public CompletionStage<Done> runForEachEvent(String id, long sequenceNr, Procedure<EventEnvelope> function) {
ActorMaterializer materializer = ActorMaterializer.create(system);
Source<EventEnvelope, NotUsed> eventsForId = ((CurrentEventsByPersistenceIdQuery) journal(system))
.currentEventsByPersistenceId(id, sequenceNr, Long.MAX_VALUE);
return eventsForId.runForeach(function, materializer);
}
public final List<Object> fetchEventsByPersistenceId1(String id, Predicate<EventEnvelope> filter) {
List<Object> allEvents = new ArrayList<>();
try {
((CurrentEventsByPersistenceIdQuery) journal(system)).currentEventsByPersistenceId(id, 0, Long.MAX_VALUE)
.filter(filter).runForeach((event) -> allEvents.add(event.event()), create(system)).toCompletableFuture()
.get();
} catch (InterruptedException | ExecutionException e) {
log.error(" Error while getting currentEventsForPersistenceId for id " + id, e);
}
return allEvents;
}
public List<Object> fetchEventsByPersistenceId(String id) {
List<Object> allEvents = new ArrayList<>();
try {
((CurrentEventsByPersistenceIdQuery) journal(system)).currentEventsByPersistenceId(id, 0, Long.MAX_VALUE)
.runForeach((event) -> allEvents.add(event.event()), create(system)).toCompletableFuture()
.get();
} catch (InterruptedException | ExecutionException e) {
log.error(" Error while getting currentEventsForPersistenceId for id " + id, e);
}
return allEvents;
}
@SafeVarargs
public final List<String> currentPersistenceIds(Materializer materializer, Predicate<String>... filters)
throws InterruptedException, ExecutionException {
Source<String, NotUsed> currentPersistenceIds = ((CurrentPersistenceIdsQuery) journal(system))
.currentPersistenceIds();
for (Predicate<String> filter : filters)
currentPersistenceIds = currentPersistenceIds.filter(filter);
List<String> allIds = new ArrayList<String>();
CompletionStage<Done> allIdCompletionStage = currentPersistenceIds.runForeach(id -> allIds.add(id), materializer);
allIdCompletionStage.toCompletableFuture().get();
return allIds;
}
@SafeVarargs
public final Source<String, NotUsed> allPersistenceIds(Predicate<String>... filters) {
Source<String, NotUsed> allPersistenceIds = ((AllPersistenceIdsQuery) journal(system)).allPersistenceIds();
for (Predicate<String> filter : filters)
allPersistenceIds = allPersistenceIds.filter(filter);
return allPersistenceIds;
}
public final Source<EventEnvelope, NotUsed> currentEventsSourceForPersistenceId(String id) {
return ((CurrentEventsByPersistenceIdQuery) journal(system)).currentEventsByPersistenceId(id, 0, Long.MAX_VALUE);
}
public final Source<EventEnvelope, NotUsed> allEventsSourceForPersistenceId(String id) {
return allEventsSourceForPersistenceId(id, 0, Long.MAX_VALUE);
}
public final Source<EventEnvelope, NotUsed> allEventsSourceForPersistenceId(String id, long from, long to) {
return ((EventsByPersistenceIdQuery) journal(system)).eventsByPersistenceId(id, from, to);
}
}
The event store is populated with the relevant events through the actor system, and the following code tests the incoming events and consumes them through an actor as the sink.
The issue I am facing is that some messages are being dropped, so not all the events are being fed to the stream mapping function.
package com.org.utils;
import static com.wt.utils.akka.SpringExtension.SpringExtProvider;
import java.util.List;
import java.util.concurrent.ExecutionException;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.stereotype.Service;
import com.org.domain.Ad;
import com.wt.domain.Px;
import com.wt.domain.repo.AdRepo;
import com.wt.domain.write.events.AdCalc;
import akka.NotUsed;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.PoisonPill;
import akka.stream.ActorMaterializer;
import akka.stream.javadsl.MergeHub;
import akka.stream.javadsl.RunnableGraph;
import akka.stream.javadsl.Sink;
import akka.stream.javadsl.Source;
@Service("Tester")
public class Tester {
private AdRepo adRepo;
private ActorRef readerActor;
private ActorMaterializer materializer;
private JournalProvider provider;
@Autowired
public Tester(JournalProvider provider, AdRepo adRepo, ActorSystem system) {
super();
this.provider = provider;
this.adRepo= adRepo;
this.readerActor = system.actorOf(SpringExtProvider.get(system).props("ReaderActor"), "reader-actor");
this.materializer = ActorMaterializer.create(system);
}
public void testerFunction() throws InterruptedException, ExecutionException {
// retrieve events of type Event1 from eventstore
Source<Event1, NotUsed> event1 = provider.allEventsSourceForPersistenceId("persistence-id")
.filter(evt -> evt.event() instanceof Event1)
.map(evt -> (Event1) evt.event());
// fetch a list of domain object of type Ad from the repository
List<Ad> adSym= adRepo.findBySymbol("symbol-name");
Ad ad = adSym.stream().findAny().get();
// map the event1 source stream to AdCalc domain event source stream
// the ad.calculator function returns a source of AdCalc domain event source stream
// Here lies the issue. Not all the event1 source objects are being converted to
// AdCalc domain event objects and are being dropped
Source<AdCalc, NotUsed> adCalcResult = event1.map(evt -> ad.calculator(evt, evt.getData()));
Sink<AdCalc, NotUsed> consumer = Sink.actorRef(readerActor, PoisonPill.getInstance());
RunnableGraph<Sink<AdCalc, NotUsed>> runnableGraph = MergeHub
.of(AdCalc.class).to(consumer);
Sink<AdCalc, NotUsed> resultAggregator = runnableGraph.run(materializer);
adCalcResult.runWith(resultAggregator, materializer);
}
public static void main(String[] args) throws InterruptedException, ExecutionException {
try (AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext(CoreAppConfiguration.class)) {
Tester tester = (Tester) ctx.getBean("Tester");
tester.testerFunction();
}
}
}
Here is the actor that does the processing
package com.org.utils;
import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Service;
import com.wt.domain.write.events.AdCalc;
import akka.actor.UntypedActor;
@Scope("prototype")
@Service("ReaderActor")
public class ReaderActor extends UntypedActor {
public void onReceive(Object message) throws Exception {
if (message instanceof AdCalc) {
final AdCalc adCalculation = (AdCalc) message;
// the above event also carries a timestamp, and from that (and of course the persistence id of the events in the event store)
// I can see that not all events are being processed; some are being dropped
System.out.println(adCalculation);
} else
unhandled(message);
context().system().stop(getSelf());
}
}
The issues, as mentioned in the code comments above, are:
The incoming source stream is dropping a lot of events, and some events are never delivered to the actor.
I need some help with the syntax for mapAsync stream integration, as the one given in the documentation gives a compilation issue (see the sketch after this list).
Syntax for actorWithRef stream integration would also be very helpful; the Akka documentation does not show it.
Thanks a ton!
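A minimal sketch of both integrations, assuming the Akka 2.5.x javadsl (in 2.6 the sink was renamed actorRefWithBackpressure); Ack, StreamInit and StreamCompleted are hypothetical control messages you would define yourself, and event1, ad, readerActor and materializer are the variables from testerFunction above:
// Hypothetical control messages for the acknowledgement protocol.
class Ack {}
class StreamInit {}
class StreamCompleted {}
// mapAsync: run ad.calculator inside a CompletionStage with bounded parallelism (4 here).
// Requires java.util.concurrent.CompletableFuture.
Source<AdCalc, NotUsed> adCalcResult = event1.mapAsync(4,
    evt -> CompletableFuture.supplyAsync(() -> ad.calculator(evt, evt.getData())));
// Backpressure-aware actor sink: unlike Sink.actorRef, the stream waits for an Ack
// after StreamInit and after each element, so the actor's mailbox cannot be flooded.
// ReaderActor must reply with getSender().tell(new Ack(), getSelf()) accordingly.
Sink<AdCalc, NotUsed> consumer = Sink.actorRefWithAck(
    readerActor,
    new StreamInit(),      // sent before the first element
    new Ack(),             // acknowledgement message the sink waits for
    new StreamCompleted(), // sent when the stream completes
    failure -> failure);   // message sent to the actor on stream failure
adCalcResult.runWith(consumer, materializer);
It may also be worth checking that ReaderActor is not stopping itself: as shown, onReceive calls context().system().stop(getSelf()) after every message, which would send all subsequent elements to dead letters.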
