I need to update 3 data regions by making a synchronous call to one data region and asynchronous calls to the other 2 data regions, using Java and Spring. Which is the best way to implement this?
Setting aside the term "region", I understand that you want to make a few HTTP requests, one of which (the first) has to be blocking.
I would suggest you have a look at Spring's WebClient, which lets you make multiple requests in parallel.
The first (blocking) call can be achieved by blocking on a Mono.
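As a minimal sketch (the region URLs, the HTTP method and the String payload type are placeholders, not from your question), the first call blocks while the other two are fired without blocking:

// Synchronous call to the first region: block until it completes.
WebClient client = WebClient.create();
String first = client.put()
        .uri("https://region-1.example.com/data")   // placeholder URL
        .retrieve()
        .bodyToMono(String.class)
        .block();                                   // blocking Mono

// Asynchronous calls to the other two regions.
Mono<String> second = client.put()
        .uri("https://region-2.example.com/data")   // placeholder URL
        .retrieve()
        .bodyToMono(String.class);
Mono<String> third = client.put()
        .uri("https://region-3.example.com/data")   // placeholder URL
        .retrieve()
        .bodyToMono(String.class);

// subscribe() triggers both requests without blocking the current thread;
// Mono.zip(second, third) could be used instead if you need the combined results.
second.subscribe();
third.subscribe();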
Here you can find a tutorial on Simultaneous Spring WebClient Calls:
https://www.baeldung.com/spring-webclient-simultaneous-calls
Cheers
I am new to Microservices. I am currently developing an application using Microservices and using both synchronous and asynchronous communication.
Recently I have read many articles saying that you shouldn't use synchronous (HTTP) communication and should only use asynchronous (message broker) communication. A few have mentioned that if the microservices communicate via REST, then you still have, in effect, a monolithic application.
Consider a scenario where we have 2 Microservices (MS) :
CurrencyConversion MS - We pass input to this MS, e.g. that we want to convert $100 to INR. CurrencyConversion MS will execute a GET call to CurrencyExchange MS to get the exchange rate for $ to INR.
CurrencyExchange MS - We pass input to this MS as $ to INR, and CurrencyExchange MS will return the exchange rate as 75, i.e. $1 = 75 INR.
In such cases, CurrencyConversion can't work independently and if CurrencyExchange is failing, CurrencyConversion is also going to fail.
So my first question is - Is synchronous communication between services an anti-pattern in Microservices?
The second question is: if synchronous communication is not the preferred way, then what is the best way to design communication between different internal services where one service executes a GET call to fetch some dependent data, as in the scenario I mentioned above?
How do we overcome this without using synchronous communication?
When you are on a microservices project, it is very frequent that microservices need other microservices. As you said, there are several ways to communicate between them: synchronously or asynchronously.
For my part, I think there is no universally good or bad choice between synchronous and asynchronous; what you need to do is choose whatever best meets your needs.
In the case you mention, I would personally choose a synchronous HTTP call, simply because with an asynchronous call it would be harder to know whether your MS has received the request and, above all, when it will answer it. That could force you to block your client's call for a while, since the client is calling you synchronously over HTTP on a REST resource.
However, if your client does not expect an immediate response to its call, you can very well start with an asynchronous call and provide a notification mechanism to inform the client that the response to its request is ready.
In any case, synchronous calls between microservices should not be considered as anti-patterns. Synchronous and asynchronous calls each meet different needs, so you have to choose which one is more appropriate in your case.
Finally, whether you go synchronous or asynchronous, there are still several ways to do it. Here is a link that, I think, explains the different possibilities for these two approaches quite well: https://dzone.com/articles/patterns-for-microservices-sync-vs-async
Synchronous communication between services is not an anti-pattern in microservices. But it's important to choose an appropriate communication style depending on the specified quality requirements. Microservices.io describes some communication patterns with pros and cons, trade-offs and examples.
In such cases, CurrencyConversion can't work independently and if CurrencyExchange is failing, CurrencyConversion is also going to fail.
In your example the two MS are highly coupled because they need to work together in a synchronous transaction to answer the user request. Assuming the user wants a response within a specific time interval (let's say 50ms), synchronous communication seems appropriate. Cascading errors can be counteracted with resilience patterns (circuit breaker, bulkhead, etc.); see the sketch below. In my opinion the example functionality should be deployed in just one MS (Currency-Service). The two described operations and the underlying domain model seem highly cohesive. That's a strong signal that you should not split the functionality into multiple MS. Communication problems solved :)
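As a minimal sketch of the circuit-breaker idea, assuming Resilience4j (just one option) and a hypothetical exchangeClient.getRate call, so a failing CurrencyExchange does not keep cascading into CurrencyConversion:

// Circuit breaker with default settings around the synchronous exchange-rate call.
CircuitBreaker circuitBreaker = CircuitBreaker.ofDefaults("currencyExchange");

Supplier<Double> decorated = CircuitBreaker.decorateSupplier(
        circuitBreaker,
        () -> exchangeClient.getRate("USD", "INR")); // exchangeClient is an assumed placeholder

try {
    double rate = decorated.get();
    // use the rate for the conversion
} catch (CallNotPermittedException e) {
    // circuit is open: fall back, e.g. to a cached rate or a meaningful error response
}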
I'm trying to create an architecture using Java Spring which will have several background processes running concurrently, listening for and pulling information as it arrives from different ZMQ sockets.
I'm not sure of the best way to do this. Right now, I'm using the @Async annotation with a ThreadPoolTaskExecutor, but the @Async function seems to be blocking the next function call in the stack.
So my questions are
1) Will an @Async function block the next function call in the stack? Or will it fire off that function in a new thread and continue executing the functions in the current thread?
2) Is there any way to give each thread an equal timeslice of computing power?
3) Are there any better ways to do this?
Thanks!
@Async will run the annotated method asynchronously using the specified executor.
There is no way to control the OS resources dedicated to individual threads.
Java has a very convenient CompletableFuture API for asynchronous computations. I've recently written a blog post about the problems with @Async and how they can be solved with CompletableFuture: Demystifying the Magic of Spring: @Async.
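For illustration, a minimal CompletableFuture sketch of the scenario above, assuming a hypothetical listenOnSocket method that polls one ZMQ socket in a loop (the socket addresses are placeholders):

// Dedicated thread pool for the background listeners.
ExecutorService executor = Executors.newFixedThreadPool(2);

// Each listener runs on its own pool thread; the calling thread is not blocked.
CompletableFuture<Void> listenerA =
        CompletableFuture.runAsync(() -> listenOnSocket("tcp://localhost:5555"), executor); // listenOnSocket is assumed
CompletableFuture<Void> listenerB =
        CompletableFuture.runAsync(() -> listenOnSocket("tcp://localhost:5556"), executor);

// The current thread continues immediately and can join or compose the futures later.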
I have a Flux<URL>. How can I make multiple concurrent void requests for each URL (for example myWebClient.refresh(URL)), then (after all requests are done) read data from the database and return a Flux<MyAnyEntity> (for example repo.findAll())?
You can achieve that using Flux/Mono operators:
// get the URIs from somewhere
Flux<URI> uris = //...
Flux<MyAnyEntity> entities = uris
// map each URI to a HTTP client call and do nothing with the response
.flatMap(uri -> webClient.get().uri(uri).exchange().then())
// chain that call with a call on your repository
.thenMany(repo.findAll());
Update:
This code is naturally asynchronous and non-blocking, so all operations in the flatMap operator will be executed concurrently, according to the demand communicated by the consumer (this is the backpressure we're talking about).
If the Reactive Streams Subscriber requests N elements with request(N), then up to N requests might be executed concurrently. I don't think this is something you want to deal with directly, although you can influence things using windowing operators for micro-batching operations.
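For example, reusing the snippet above, the flatMap overload that takes a concurrency argument caps how many inner requests are in flight at the same time (here at most 4), which is a simple way to bound that behaviour:

Flux<MyAnyEntity> entities = uris
    // at most 4 HTTP calls running concurrently
    .flatMap(uri -> webClient.get().uri(uri).exchange().then(), 4)
    .thenMany(repo.findAll());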
Using .subscribeOn(Schedulers.parallel()) will not improve concurrency in this case; as stated in the reference documentation, you should only use that for CPU-bound work.
An Angular 4 application sends a list of records to a Java Spring MVC application deployed in a WebSphere 8 servlet container. The list is then inserted into a temp table. After the batch insert, a procedure call is made in order to do some calculations and return results. Depending on the size of the list inserted into the temp table, it may take anywhere from 3000ms (N ~ 500) or 6000ms (N ~ 1000) up to 50,000+ms (N > 2000).
My approach would be to create chunks of data and send them to the database for processing simultaneously. After the threads (Futures) return their results, I would aggregate them and return them to the client. To sum up, I would split one synchronous call into multiple asynchronous, simultaneously executed processes and return the result to the client over the same thread that initiated the HTTP call and landed in my controller.
Everything would be fine and I would not be asking this question if a more experienced colleague of mine did not strongly disagree with this approach. His reasoning is that this approach is prone to exceptions due to thread interrupts, timeouts, semaphores and so on. He goes as far as saying that multithreading should be avoided within a web container because it can crash the servlet container if it runs out of threads.
He proposes that we should have the browser send multiple AJAX requests and aggregate/present the data in chunks.
Can you please help me understand which approach is better and why?
I would say that your approach is much better.
Threads created by application logic aren't application-container threads and are limited only by the operating system, while each AJAX request uses a thread from the application container. So the second approach reduces throughput and increases the chance of hitting the application container's limit, while the first one does not. Performance should also be considered, because it is much cheaper to create a thread than to send a request over the network. On top of that, each network request uses additional resources for authentication/authorization/encryption etc.
It's definitely harder to write correct multithreaded code and it is more prone to errors. However, that shouldn't stop you from doing it, because concurrency can significantly increase your performance. It's pretty straightforward to handle interrupts and timeouts using Future, and you certainly don't need semaphores here.
Exposing this logic to the client looks like breaking encapsulation. Imagine that you use a REST API which forces you to send multiple requests by splitting your data into chunks. What chunk size should I use? How do I deal with timeouts/interrupts? How many requests should I send? etc. You would face almost the same challenges in both approaches, but it's much easier to deal with them on the server using tools designed for this, like ExecutorService and Future.
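As a rough sketch of the server-side approach (the Record/Result types and the partition and processChunk helpers are assumed placeholders, not an existing API):

// Fixed pool sized to what the database can handle concurrently.
ExecutorService executor = Executors.newFixedThreadPool(4);

// Split the incoming records into chunks and submit one task per chunk.
List<Future<Result>> futures = new ArrayList<>();
for (List<Record> chunk : partition(records, 500)) {          // partition() is assumed
    futures.add(executor.submit(() -> processChunk(chunk)));  // processChunk() is assumed
}

// Aggregate the results, handling timeouts and interrupts per task.
List<Result> results = new ArrayList<>();
for (Future<Result> future : futures) {
    try {
        results.add(future.get(30, TimeUnit.SECONDS));
    } catch (TimeoutException | ExecutionException e) {
        future.cancel(true);                  // cancel the slow/failed task and decide how to degrade
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();   // restore the interrupt flag and stop waiting
        break;
    }
}
executor.shutdown();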
What I am doing:
I am using play 2.5.7 (java) and trying to build a REST application.
When I get a call on my controller I ask the first actor. This actor can only solve part of the problem (getting additional data), which needs to be forwarded to another actor that uses the request data plus the additional data to update some more data, sends an async void call (tell) to another actor, and responds to the controller. All these (4) actors are @Injected into other actors or the controller with Guice.
Flow of calls:
controller --(Patterns.ask)--> actor1 --(actor.forward)--> actor2 --(actor.forward)--> actor3 (-tell-> actor4) and --(sender().tell)--> controller.
Issue:
This works for the first 4 calls. After that, actor1.forward keeps failing on every consecutive request and Patterns.ask times out. A System.out on the line before actor1.forward executes, but the actual forward does not, no matter the timeout value (I tried even 20s). Nothing changes in the request; I just hit the send button in Postman every time.
I have two questions:
Why 4? Why does it fail after the 4th request? Is it some config? What should I look for in the config?
Is what I am doing with actors the correct way to build a REST web service?
Update: I found the issue; it was caused by consuming Redis connections from the pool and never freeing them. But my second question still remains: is what I am doing here advisable?
Sure, this could be a reasonable design. But I would consider whether it would be more maintainable to work with Future-returning methods, unless your workflow requires some complex protocol between multiple moving pieces or internal state. It may also be worth considering Akka Streams if your processing doesn't map well to async method calls.
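As a rough illustration of that alternative (fetchAdditionalData and updateData are assumed placeholder methods returning CompletionStage), a Play controller action can compose the steps directly instead of routing through several actors:

// Controller action returning a CompletionStage<Result> instead of asking actors.
public CompletionStage<Result> handle() {
    return fetchAdditionalData()                            // assumed async service call
            .thenCompose(data -> updateData(data))          // assumed async service call
            .thenApply(updated -> ok(Json.toJson(updated)));
}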
Basically, actors are a pretty low-level tool. To the extent that you need them, I would try to minimize the surface area of your application where they are being directly used. Higher-level abstractions are better, where possible.