Updating data in a finite state machine - Java

I am using the Akka FSM framework via its Java API to manage state transitions. Here is the relevant portion of the state machine:
when(QUEUED,
matchEvent(Exception.class, Service.class,
(exception, dservice) -> goTo(ERROR)
.replying(ERROR)));
// TODO: It seems to be missing from the docs that, to transition from a state,
// every state must be listed
// a service is in an errored state
when(ERROR,
matchAnyEvent((state, data) -> stay().replying("Staying in Errored state")));
onTransition(matchState(QUEUED, ERROR, () -> {
// Update the Service object and save it to the database
}));
This works as expected and the correct state changes happen with the actor. In the onTransition() block, I want to update the Service object, which is the finite state machine data in this case, with something like the following:
Service.setProperty(someProperty)
dbActor.tell(saveService);
Is this possible? Am I using this framework in the right way?
I think I was able to do something like the following:
onTransition(matchState(QUEUED,ERROR, () -> {
nextStateData().setServiceStatus(ERROR);
// Get the actual exception message here to save to the database
databaseWriter.tell(nextStateData(), getSelf());
}));
How do I now actually test the data that's changed as a result of this transition?
The test looks like this:
@Test
public void testErrorState() {
new TestKit(system) {
{
TestProbe probe = new TestProbe(system);
final ActorRef underTest = system.actorOf(ServiceFSMActor.props(dbWriter));
underTest.tell(new Exception(), getRef());
expectMsgEquals(ERROR); // This works
// How do I make sure the data is updated here as part of the OnTransition declaration??
}
};
}

You've defined a probe in the test but you're not using it. Since the FSM actor sends the updated state to the database writer actor, you could test the updated state by replacing the database writer actor with the probe:
new TestKit(system) {{
final TestProbe probe = new TestProbe(system);
final ActorRef underTest = system.actorOf(ServiceFSMActor.props(probe.ref()));
underTest.tell(new Exception(), getRef());
expectMsgEquals(ERROR);
final Service state = probe.expectMsgClass(Service.class);
assertEquals(ERROR, state.getServiceStatus());
}};
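For completeness, this assumes ServiceFSMActor receives the database writer reference through its Props, along the lines of the sketch below (the ServiceState enum and the Service data class are placeholders for whatever the actor actually uses):
import akka.actor.AbstractFSM;
import akka.actor.ActorRef;
import akka.actor.Props;
public class ServiceFSMActor extends AbstractFSM<ServiceState, Service> {
private final ActorRef databaseWriter;
public static Props props(ActorRef databaseWriter) {
return Props.create(ServiceFSMActor.class, databaseWriter);
}
public ServiceFSMActor(ActorRef databaseWriter) {
this.databaseWriter = databaseWriter;
startWith(QUEUED, new Service());
// the when(QUEUED, ...), when(ERROR, ...) and onTransition(...) blocks
// from the question go here, with databaseWriter used in the transition
initialize();
}
}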

Related

RabbitMQ events received in quick succession are written twice to MongoDB, causing microservice test to fail

Service testing a reactive microservice yields wrong results because RabbitMQ events are persisted twice in MongoDB. If two events arrive in quick succession, the second event should not be persisted as a new document; a check should find the document already created by the first event, and the second event should then be processed differently (by incrementing a counter in the existing document).
This happens presumably because the events are published very quickly in succession (and maybe something with MongoDB's concurrency model or Java threading?).
At the RabbitMQ endpoint, it works when using this code (.await().indefinitely()):
historicalDataHandler
.handleDischargedVerifiedEvent(dischargeVerifiedEvent)
.await()
.indefinitely();
But this code does not (reactive, .subscribe().with()):
historicalDataHandler
.handleDischargedVerifiedEvent(dischargeVerifiedEvent)
.subscribe()
.with(
dbResult -> {
log.infov(
EVENT_HANDLED_LOG_STRING + ", {dischargedVerifiedEvent}, {dbResult}",
dischargeVerifiedEvent,
dbResult);
},
throwable ->
log.errorv(throwable, "Handling discharge verified event fails {event}", dischargeVerifiedEvent));
I know the reason: the first version blocks the thread until the Uni is resolved, while the second just calls the function and frees the thread for other tasks.
I prefer the latter solution because we want our code to be reactive / non-blocking.
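For reference, one non-blocking way to keep the handling sequential is to return the Uni from the consuming method instead of subscribing manually; assuming SmallRye Reactive Messaging drives the RabbitMQ endpoint here, the framework waits for the returned Uni to complete before dispatching the next message. A sketch with hypothetical channel and type names:
@Incoming("discharge-verified")
public Uni<Void> consume(DischargeVerifiedEvent dischargeVerifiedEvent) {
// The framework subscribes to the returned Uni and requests the next
// message only after this one completes, so events are handled one at
// a time without blocking a thread.
return historicalDataHandler
.handleDischargedVerifiedEvent(dischargeVerifiedEvent)
.replaceWithVoid();
}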
Here is the code from the Panache MongoDB repository:
@Startup
@ApplicationScoped
public class FlightAllocationBagCountRepository
implements ReactivePanacheMongoRepositoryBase<FlightAllocationBagCount, String> {
@Inject Logger log;
public Uni<UpdateResult> updateFlightAllocationBagCount(
FlightAllocationBagCount flightAllocationBagCount) {
var allocation = flightAllocationBagCount.getAllocations().get(0);
return this.mongoCollection()
.updateOne(
and(
eq(COMPOSITE_ID, flightAllocationBagCount.getCompositeId()),
eq(
ALLOCATIONS + "." + AllocationPropertyConstants.COMPOSITE_ID,
allocation.getCompositeId())),
inc(ALLOCATIONS + ".$." + BAG_COUNT, 1))
.onItem()
.transformToUni(
incResult -> {
// If the increment was done, the event is now stored
if (incResult.getModifiedCount() > 0) {
log.infov(
"Incremented bag count, {flight}, {allocation}",
flightAllocationBagCount,
allocation);
return Uni.createFrom().item(incResult);
}
// The flight and allocation combination did not exist - create them
var upsertResult =
this.mongoCollection()
.updateOne(
and(eq(COMPOSITE_ID, flightAllocationBagCount.getCompositeId())),
combine(generateBsonUpdates(flightAllocationBagCount)),
new UpdateOptions().upsert(true));
log.infov(
"Flight document created / updated with allocation, {flight}, {allocation}",
flightAllocationBagCount,
allocation);
return upsertResult;
});
}
}
Here is the test that gives the wrong result. What we do in the test is:
Publish some events on the RabbitMQ queue
Wait to see in the logs that those events have been processed
Send a request to the gRPC interface and assert
@Test
void getAverageBagCountBySubAllocation() throws Exception {
// Given two items to same sub allocation on same date
WaitingConsumer consumer = getValidItemEventWaitingConsumer();
publish(getDefaultFlightEvent());
publish(getDefaultItemEvent("BARCODE1"));
publish(getDefaultItemEvent("BARCODE2"));
// and that the events are successfully handled
consumer.waitUntil(
frame -> frame.getUtf8String().contains(EVENT_HANDLED_LOG_STRING), 30, SECONDS);
// And given a request of a specific sub allocation
var allocation =
HistoricalData.Allocation.newBuilder()
.setBagClass(DEFAULT_BAG_CLASS)
.setDestination(DEFAULT_BAG_DESTINATION)
.setBagExceptionType(DEFAULT_BAG_EXCEPTION_TYPE.toGrpcEnum())
.setTransfer(DEFAULT_TRANSFER)
.build();
var request =
HistoricalData.GetAverageBagCountRequest.newBuilder()
.setAirline(DEFAULT_AIRLINE)
.setFlightNumber(DEFAULT_FLIGHT_NUMBER)
.setFlightSuffix(DEFAULT_FLIGHT_SUFFIX)
.setFlightDate(DEFAULT_FLIGHT_DATE_STAMP)
.setMinNumberOfDays(DEFAULT_MIN_NUMBER_OF_DAYS)
.setMaxNumberOfDays(DEFAULT_MAX_NUMBER_OF_DAYS)
.setAllocation(allocation)
.build();
// When we request to get average bag count by sub allocation
var response = getStub().getAverageBagCount(request);
// Then the average bag count is 2, based on the average of 1 days
assertThat(response.getAverageBagCount()).isEqualTo(2);
assertThat(response.getNumberOfDaysUsedForAverageCalculation()).isEqualTo(1);
}
Bonus question: How can we avoid consumer.waitUntil()? Using it means the test is no longer black-box testing.

RxJava Combine Sequence Of Requests

The Problem
I have two APIs. API 1 gives me a list of items, and API 2 gives me more detailed information for each of the items I got from API 1. The way I have solved it so far results in bad performance.
The Question
What is an efficient and fast solution to this problem with the help of Retrofit and RxJava?
My Approach
At the moment my solution looks like this:
Step 1: Retrofit executes Single<ArrayList<Information>> from API 1.
Step 2: I iterate through these items and make a request to API 2 for each one.
Step 3: Retrofit sequentially executes Single<ExtendedInformation> for each item.
Step 4: After all calls to API 2 have completed, I create a new object for each item, combining the Information and ExtendedInformation.
My Code
public void addExtendedInformations(final Information[] informations) {
final ArrayList<InformationDetail> informationDetailArrayList = new ArrayList<>();
final JSONRequestRatingHelper.RatingRequestListener ratingRequestListener = new JSONRequestRatingHelper.RatingRequestListener() {
@Override
public void onDownloadFinished(Information baseInformation, ExtendedInformation extendedInformation) {
informationDetailArrayList.add(new InformationDetail(baseInformation, extendedInformation));
if (informationDetailArrayList.size() >= informations.length){
listener.onAllExtendedInformationLoadedAndCombined(informationDetailArrayList);
}
}
};
for (Information information : informations) {
getExtendedInformation(ratingRequestListener, information);
}
}
public void getExtendedInformation(final JSONRequestRatingHelper.RatingRequestListener ratingRequestListener, final Information information) {
Single<ExtendedInformation> repos = service.findForTitle(information.title);
disposable.add(repos.subscribeOn(Schedulers.io()).observeOn(AndroidSchedulers.mainThread()).subscribeWith(new DisposableSingleObserver<ExtendedInformation>() {
@Override
public void onSuccess(ExtendedInformation extendedInformation) {
ratingRequestListener.onDownloadFinished(information, extendedInformation);
}
@Override
public void onError(Throwable e) {
ExtendedInformation extendedInformation = new ExtendedInformation();
ratingRequestListener.onDownloadFinished(information, extendedInformation);
}
}));
}
public interface RatingRequestListener {
void onDownloadFinished(Information information, ExtendedInformation extendedInformation);
}
tl;dr use concatMapEager or flatMap and execute sub-calls asynchronously or on a scheduler.
long story
I'm not an Android developer, so my answer will be limited to pure RxJava (version 1 and version 2).
If I get the picture right the needed flow is :
some query param
\--> Execute query on API_1 -> list of items
|-> Execute query for item 1 on API_2 -> extended info of item1
|-> Execute query for item 2 on API_2 -> extended info of item2
|-> Execute query for item 3 on API_2 -> extended info of item3
...
\-> Execute query for item n on API_2 -> extended info of item n
\----------------------------------------------------------------------/
|
\--> stream (or list) of extended item info for the query param
Assuming Retrofit generated the clients for:
interface Api1 {
#GET("/api1") Observable<List<Item>> items(#Query("param") String param);
}
interface Api2 {
#GET("/api2/{item_id}") Observable<ItemExtended> extendedInfo(#Path("item_id") String item_id);
}
If the order of the items is not important, then it is possible to use flatMap only:
api1.items(queryParam)
.flatMap(itemList -> Observable.fromIterable(itemList))
.flatMap(item -> api2.extendedInfo(item.id()))
.subscribe(...)
But only if the Retrofit builder is configured in one of the following ways.
Either with the async adapter (calls will be queued in the OkHttp internal executor). I personally think this is not a good idea, because you don't have control over this executor:
.addCallAdapterFactory(RxJava2CallAdapterFactory.createAsync())
Or with the scheduler-based adapter (calls will be scheduled on the given RxJava scheduler). This would be my preferred option, because you explicitly choose which scheduler is used; it will most likely be the IO scheduler, but you are free to try a different one:
.addCallAdapterFactory(RxJava2CallAdapterFactory.createWithScheduler(Schedulers.io()))
The reason is that flatMap will subscribe to each observable created by api2.extendedInfo(...) and merge them in the resulting observable. So results will appear in the order they are received.
If the Retrofit client is not set to be async or to run on a scheduler, it is possible to set one:
api1.items(queryParam)
.flatMap(itemList -> Observable.fromIterable(itemList))
.flatMap(item -> api2.extendedInfo(item.id()).subscribeOn(Schedulers.io()))
.subscribe(...)
This structure is almost identical to the previous one, except that it indicates locally on which scheduler each api2.extendedInfo call is supposed to run.
It is possible to tune the maxConcurrency parameter of flatMap to control how many requests you want to perform at the same time, as in the sketch below. Although I'd be cautious with this one: you don't want to run all queries at the same time. Usually the default maxConcurrency (128) is good enough.
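For example, a sketch that caps flatMap at four in-flight requests (same hypothetical Api1/Api2 clients as above):
api1.items(queryParam)
.flatMap(itemList -> Observable.fromIterable(itemList))
.flatMap(item -> api2.extendedInfo(item.id()).subscribeOn(Schedulers.io()),
4) // maxConcurrency: at most 4 sub-requests run at the same time
.subscribe(...)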
Now, if the order of the original results matters: concatMap is usually the operator that does the same thing as flatMap but in order; however, it runs sequentially, which turns out to be slow if the code needs to wait for all sub-queries to be performed. The solution is one step further: concatMapEager, which subscribes to the observables in order and buffers the results as needed.
Assuming the Retrofit clients are async or run on a specific scheduler:
api1.items(queryParam)
.flatMap(itemList -> Observable.fromIterable(itemList))
.concatMapEager(item -> api2.extendedInfo(item.id()))
.subscribe(...)
Or if the scheduler has to be set locally :
api1.items(queryParam)
.flatMap(itemList -> Observable.fromIterable(itemList))
.concatMapEager(item -> api2.extendedInfo(item.id()).subscribeOn(Schedulers.io()))
.subscribe(...)
It is also possible to tune the concurrency in this operator.
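For instance, a sketch using the three-argument overload, where the second argument caps the number of concurrent subscriptions and the third is the prefetch amount:
api1.items(queryParam)
.flatMap(itemList -> Observable.fromIterable(itemList))
.concatMapEager(item -> api2.extendedInfo(item.id()),
4, // maxConcurrency
128) // prefetch
.subscribe(...)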
Additionally, if the API returns a Flowable, it is possible to use .parallel (still in beta as of RxJava 2.1.7). But then results are not in order, and I don't know a way (yet?) to restore the order without sorting afterwards.
api.items(queryParam) // Flowable<Item>
.parallel(10)
.runOn(Schedulers.io())
.flatMap(item -> api2.extendedInfo(item.id())) // assuming api2 returns Flowable here
.sequential(); // Flowable<ItemExtended>
The flatMap operator is designed to cater to these types of workflows.
I'll outline the broad strokes with a simple five-step example; hopefully you can easily reconstruct the same principles in your code:
@Test fun flatMapExample() {
// (1) constructing a fake stream that emits a list of values
Observable.just(listOf(1, 2, 3, 4, 5))
// (2) convert our List emission into a stream of its constituent values
.flatMap { numbers -> Observable.fromIterable(numbers) }
// (3) subsequently convert each individual value emission into an Observable of some
// newly calculated type
.flatMap { number ->
when(number) {
1 -> Observable.just("A1")
2 -> Observable.just("B2")
3 -> Observable.just("C3")
4 -> Observable.just("D4")
5 -> Observable.just("E5")
else -> throw RuntimeException("Unexpected value for number [$number]")
}
}
// (4) collect all the final emissions into a list
.toList()
.subscribeBy(
onSuccess = {
// (5) handle all the combined results (in list form) here
println("## onNext($it)")
},
onError = { error ->
println("## onError(${error.message})")
}
)
}
(Incidentally, if the order of the emissions matters, look at using concatMap instead.)
I hope that helps.
Check the example below; it works.
Say you have multiple network calls you need to make: calls to get GitHub user information and GitHub user events, for example.
And you want to wait for each to return before updating the UI. RxJava can help you here.
Let's first define our Retrofit object to access GitHub's API, then set up two observables for the two network request calls:
Retrofit repo = new Retrofit.Builder()
.baseUrl("https://api.github.com")
.addConverterFactory(GsonConverterFactory.create())
.addCallAdapterFactory(RxJavaCallAdapterFactory.create())
.build();
Observable<JsonObject> userObservable = repo
.create(GitHubUser.class)
.getUser(loginName)
.subscribeOn(Schedulers.newThread())
.observeOn(AndroidSchedulers.mainThread());
Observable<JsonArray> eventsObservable = repo
.create(GitHubEvents.class)
.listEvents(loginName)
.subscribeOn(Schedulers.newThread())
.observeOn(AndroidSchedulers.mainThread());
The interfaces used for it look like this:
public interface GitHubUser {
#GET("users/{user}")
Observable<JsonObject> getUser(#Path("user") String user);
}
public interface GitHubEvents {
#GET("users/{user}/events")
Observable<JsonArray> listEvents(#Path("user") String user);
}
Then we use RxJava's zip method to combine our two observables and wait for both to complete before creating a new observable:
Observable<UserAndEvents> combined = Observable.zip(userObservable, eventsObservable, new Func2<JsonObject, JsonArray, UserAndEvents>() {
@Override
public UserAndEvents call(JsonObject jsonObject, JsonArray jsonElements) {
return new UserAndEvents(jsonObject, jsonElements);
}
});
Finally let’s call the subscribe method on our new combined Observable:
combined.subscribe(new Subscriber<UserAndEvents>() {
...
@Override
public void onNext(UserAndEvents o) {
// You can access the results of the
// two observables via the POJO now
}
});
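As a side note, in RxJava 2 the same zip is usually written with a BiFunction lambda instead of Func2 (a sketch, assuming the same observables and POJO):
Observable<UserAndEvents> combined = Observable.zip(
userObservable,
eventsObservable,
(user, events) -> new UserAndEvents(user, events));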
No more waiting on threads etc. for network calls to finish; RxJava has done all that for you in zip().
Hope my answer helps you.
I solved a similar problem with RxJava 2. Executing the requests to API 2 in parallel speeds up the work somewhat.
private InformationRepository informationRepository;
//init....
public Single<List<FullInformation>> getFullInformation() {
return informationRepository.getInformationList()
.subscribeOn(Schedulers.io()) // I usually put subscribeOn() in the repository; it is here for clarity
.flatMapObservable(Observable::fromIterable)
.flatMapSingle(this::getFullInformation)
.collect(ArrayList::new, List::add);
}
private Single<FullInformation> getFullInformation(Information information) {
return informationRepository.getExtendedInformation(information)
.map(extendedInformation -> new FullInformation(information, extendedInformation))
.subscribeOn(Schedulers.io());//execute requests in parallel
}
InformationRepository is just an interface; its implementation is not interesting to us.
public interface InformationRepository {
Single<List<Information>> getInformationList();
Single<ExtendedInformation> getExtendedInformation(Information information);
}
FullInformation is the container for the result.
public class FullInformation {
private Information information;
private ExtendedInformation extendedInformation;
public FullInformation(Information information, ExtendedInformation extendedInformation) {
this.information = information;
this.extendedInformation = extendedInformation;
}
}
Try using the Observable.zip() operator. It will wait until both API calls have finished before continuing the stream. Then you can insert further logic by calling flatMap() afterwards, as sketched below.
http://reactivex.io/documentation/operators/zip.html
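A minimal sketch of that shape (apiA(), apiB(), saveToDb() and Combined are hypothetical):
Observable.zip(
apiA(), // Observable<A>
apiB(), // Observable<B>
(a, b) -> new Combined(a, b)) // emitted once both calls have finished
.flatMap(combined -> saveToDb(combined)) // follow-up logic on the merged result
.subscribe(
result -> System.out.println("done: " + result),
Throwable::printStackTrace);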

Single RxJava how to extract object

I can think of two ways to get the value from a Single:
Single<HotelResult> singleHotelResult =
apiObservables.getHotelInfoObservable(requestBody);
final HotelResult[] hotelResults = new HotelResult[1];
singleHotelResult
.subscribe(hotelResult -> {
hotelResults[0] = hotelResult;
});
Or
final HotelResult hotelResult = singleHotelResult
.toBlocking()
.value();
It's written in the documentation that we should avoid using the toBlocking() method.
So is there any better way to get the value?
Even though it is not recommended to block (you should subscribe instead), in RxJava 2 the method for blocking is blockingGet(); it returns the object immediately.
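For example:
// Blocks the calling thread until the Single completes; avoid this on the UI thread.
HotelResult hotelResult = singleHotelResult.blockingGet();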
When we use toBlocking, we get the result synchronously. When we use subscribe, the result is obtained asynchronously.
Single<HotelResult> singleHotelResult =
apiObservables.getHotelInfoObservable(requestBody);
final HotelResult[] hotelResults = new HotelResult[1];
singleHotelResult.subscribe(hotelResult -> {
hotelResults[0] = hotelResult;
});
// hotelResults[0] may not be initialized here yet
// println may not show the result yet (if the operation fetching the hotel info takes long)
System.out.println(hotelResults[0]);
For blocking case:
final HotelResult hotelResult = singleHotelResult.toBlocking().value();
// hotelResult has a value here, but the program is stuck on this line until the API call completes.
toBlocking helps in cases where you are using observables in "normal" code that needs the result in place.
subscribe helps, for example, in an Android application, where you can set actions in subscribe such as showing the result on the page, disabling a button, etc.

How to write and query data from elasticsearch within several concurrent processes?

I am writing a simple application in Scala that listens to Kafka on different topics, and when a write event comes in, I write the data to Elasticsearch. I'm using elastic4s as a wrapper around the Elasticsearch Java API. Listening to the different topics is implemented as concurrent processes run in Futures. The code in my app looks like this:
Future { container.orangesHandler.runMessagesHandling() }
Future { container.lemonsHandler.runMessagesHandling() }
Future { container.limesHandler.runMessagesHandling() }
Future { container.mandarinsHandler.runMessagesHandling() }
For writing to Elasticsearch I have a helper object containing an ElasticClient (private val client = ElasticClient.transport(uri)) that connects to Elasticsearch and through which I write data. To implement writing, I have the following methods in this object:
def update(indexName: String, objectType: String, data: String) =
checkForIndexAndTypeExistence(indexName, objectType).flatMap { response =>
client.execute {
index into indexName -> objectType source data
}
}
private def checkForIndexAndTypeExistence(indexName: String, objectType: String) =
client.execute(indexExists(indexName)).flatMap { response =>
if (!response.isExists)
client.execute(create index indexName mappings (mapping(objectType) templates(dynamicTemplate)))
else
checkForTypeExistence(indexName, objectType)
}
private def checkForTypeExistence(indexName: String, objectType: String) =
client.execute(typesExist(objectType) in indexName).flatMap { response =>
if (!response.isExists())
client.execute(putMapping(indexName / objectType) templates(dynamicTemplate))
else Future(true)
}
The problem: when I run three Futures, everything works fine, but when I add the fourth Future, it doesn't work. Specifically, the client doesn't work; it just responds with a Promise that never completes. One interesting detail: when I run this app on a machine with a quad-core processor, it works fine with four Futures.

Make two requests in worker verticle and merge response from two requests

I have a Vert.x server application where, for a single client request, I need to make two blocking calls: for instance, one call to back-end system A and another to back-end system B. I want to make the two calls concurrently, wait for the responses from both, merge the data from both calls, and then send the response back to the client. I am unable to figure out how to do this in a worker verticle.
Could anyone recommend what would be the best approach in Vert.x?
This sounds like a good use case for Promises. Give the module vertx-promises a try.
Create a CompositeFuture from your launched Futures and handle it normally:
public Future<JsonArray> getEntitiesByIndFields(String keyspace, String entidad, String field1, String field2) {
Promise<JsonArray> p = Promise.promise();
// launch in parallel
Future<JsonArray> f1 = getEntitiesByIndField1(keyspace, entidad, field1);
Future<JsonArray> f2 = getEntitiesByIndField2(keyspace, entidad, field2);
CompositeFuture.all(f1, f2).setHandler(done ->
{
if (done.failed()) {
p.fail(done.cause());
return;
}
List<JsonArray> ja = done.result().list();
JsonArray finalarray = ja.get(0);
ja.get(1).forEach(jo ->
{ // add one by one, don't duplicate ids
long id = ((JsonObject) jo).getLong("id");
if (!containsKey(finalarray, id)) {
finalarray.add(jo);
}
});
p.complete(finalarray); // send union of founds
});
return p.future();
}
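One detail worth knowing: CompositeFuture.all fails fast as soon as either future fails. If you would rather wait for both outcomes regardless and inspect them individually, CompositeFuture.join has the same shape:
// Completes only when both futures have completed, even if one of them failed.
CompositeFuture.join(f1, f2).setHandler(done -> { /* same handling as above */ });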
