I have a Spring Boot application that calls several microservice URLs using the GET method. These microservice endpoints are all implemented as @RestControllers. They don't return Flux or Mono.
I need my application to capture which URLs are not returning a 2xx HTTP status.
I'm currently using the following code to do this:
List<String> failedServiceUrls = new ArrayList<>();
for (String serviceUrl : serviceUrls.getServiceUrls()) {
    try {
        ResponseEntity<String> response = rest.getForEntity(serviceUrl, String.class);
        if (!response.getStatusCode().is2xxSuccessful()) {
            failedServiceUrls.add(serviceUrl);
        }
    } catch (Exception e) {
        failedServiceUrls.add(serviceUrl);
    }
}
// all checks are complete so send the email with the failedServiceUrls
mail.sendEmail("Service Check Complete", failedServiceUrls);
The problem is that each URL call is slow to respond, and I have to wait for one call to complete before making the next one.
How can I change this so that the URL calls are made concurrently? After all calls have completed, I need to send an email listing any URLs that failed, collected in failedServiceUrls.
Update
I revised the above post to state that I just want the calls to be made concurrently. I don't care that the rest.getForEntity call blocks.
Using an ExecutorService, you can call all the microservices in parallel this way:
// synchronise the list as per Maciej's comment (declared once so it stays effectively final):
List<String> failedServiceUrls = Collections.synchronizedList(new ArrayList<>());

ExecutorService executorService = Executors.newFixedThreadPool(serviceUrls.getServiceUrls().size());
List<Callable<String>> tasks = serviceUrls.getServiceUrls().stream()
        .map(serviceUrl -> new Callable<String>() {
            @Override
            public String call() throws Exception {
                ResponseEntity<String> response = rest.getForEntity(serviceUrl, String.class);
                // record the URL if the response is not a 2xx
                if (!response.getStatusCode().is2xxSuccessful()) {
                    failedServiceUrls.add(serviceUrl);
                }
                return response.getBody();
            }
        })
        .collect(Collectors.toList());

// invokeAll blocks until every task has finished
List<Future<String>> results = executorService.invokeAll(tasks);
for (Future<String> future : results) {
    String resultFromService = future.get(); // rethrows any exception thrown by the task
}
executorService.shutdown();
If you just want to make the calls concurrently and you don't care about blocking threads, you can:
1. Wrap the blocking service call using Mono#fromCallable.
2. Transform serviceUrls.getServiceUrls() into a reactive stream using Flux#fromIterable.
3. Concurrently call and filter out the failed services with Flux#filterWhen, using the Flux from step 2 and the asynchronous service call from step 1.
4. Wait for all calls to complete using Flux#collectList and send the email with the invalid URLs in subscribe.
void sendFailedUrls() {
    Flux.fromIterable(serviceUrls.getServiceUrls())
        .filterWhen(url -> responseFailed(url))
        .collectList()
        .subscribe(failedUrls -> mail.sendEmail("Service Check Complete", failedUrls));
}

Mono<Boolean> responseFailed(String url) {
    return Mono.fromCallable(() -> rest.getForEntity(url, String.class))
            .map(response -> !response.getStatusCode().is2xxSuccessful())
            .subscribeOn(Schedulers.boundedElastic());
}
Blocking calls with Reactor
Since the underlying service call is blocking, it should be executed on a dedicated thread pool. The size of this thread pool should be equal to the number of concurrent calls if you want to achieve full concurrency. That's why we need .subscribeOn(Schedulers.boundedElastic()).
See: https://projectreactor.io/docs/core/release/reference/#faq.wrap-blocking
Better solution using WebClient
Note, however, that blocking calls should be avoided when using Reactor and Spring WebFlux. The correct way to do this would be to replace RestTemplate with WebClient from Spring 5, which is fully non-blocking.
See: https://docs.spring.io/spring-boot/docs/2.0.3.RELEASE/reference/html/boot-features-webclient.html
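For comparison, here is a rough sketch of the same check done with WebClient instead of RestTemplate, following the filterWhen approach above. The webClient, serviceUrls and mail references are assumptions standing in for your own beans; the exact setup may differ:
WebClient webClient = WebClient.create(); // or inject a WebClient.Builder

Flux.fromIterable(serviceUrls.getServiceUrls())
    .filterWhen(url -> webClient.get()
            .uri(url)
            .retrieve()
            .toBodilessEntity()
            .map(response -> !response.getStatusCode().is2xxSuccessful())
            .onErrorReturn(true)) // connection errors and non-2xx responses count as failures
    .collectList()
    .subscribe(failedUrls -> mail.sendEmail("Service Check Complete", failedUrls));
Because WebClient is non-blocking, no subscribeOn(Schedulers.boundedElastic()) is needed here.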
Related
I'm using a Feign client in reactive Java. The Feign client has an interceptor that sends a blocking request to get an auth token and adds it as a header to the Feign request.
The Feign request is wrapped in Mono.fromCallable with Schedulers.boundedElastic().
My question is: is the inner call to get the auth token considered a blocking call?
I get that both calls will be on a thread from Schedulers.boundedElastic(), but I'm not sure whether it's OK to execute them on the same thread or whether I should change it so they run on different threads.
Feign client:
@FeignClient(name = "remoteRestClient", url = "${remote.url}",
        configuration = AuthConfiguration.class, decode404 = true)
@Profile({ "!test" })
public interface RemoteRestClient {

    @GetMapping(value = "/getSomeData")
    Data getData();
}
interceptor:
public class ClientRequestInterceptor implements RequestInterceptor {

    private IAPRequestBuilder iapRequestBuilder;
    private String clientName;

    public ClientRequestInterceptor(String clientName, String serviceAccount, String jwtClientId) {
        this.iapRequestBuilder = new IAPRequestBuilder(serviceAccount, jwtClientId);
        this.clientName = clientName;
    }

    @Override
    public void apply(RequestTemplate template) {
        try {
            HttpRequest httpRequest = iapRequestBuilder.buildIapRequest(); // <-- blocking call
            template.header(HttpHeaders.AUTHORIZATION, httpRequest.getHeaders().getAuthorization());
        } catch (IOException e) {
            log.error("Building an IAP request has failed: {}", e.getMessage(), e);
            throw new InterceptorException(String.format("failed to build IAP request for %s", clientName), e);
        }
    }
}
feign configuration:
public class AuthConfiguration {

    @Value("${serviceAccount}")
    private String serviceAccount;

    @Value("${jwtClientId}")
    private String jwtClientId;

    @Bean
    public ClientRequestInterceptor getClientRequestInterceptor() {
        return new ClientRequestInterceptor("Entitlement", serviceAccount, jwtClientId);
    }
}
and feign client call:
private Mono<Data> getData() {
    return Mono.fromCallable(() -> remoteRestClient.getData())
            .publishOn(Schedulers.boundedElastic());
}
You can sort of tell that it is a blocking call since it returns a concrete class and not a Future (Mono or Flux). To be able to return a concrete class, the thread needs to wait until we have the response to return it.
So yes it is most likely a blocking call.
Reactor recommends that you use the subscribeOn operator when doing blocking calls; this will place the entire chain of operators on its own thread pool.
You have chosen to use publishOn, and it is worth pointing out the following from the docs:
affects where the subsequent operators execute
In practice this means that, up until the publishOn operator, all actions will be executed using any available anonymous thread.
But all calls after it will be executed on the defined thread pool.
private Mono<Data> getData() {
    return Mono.fromCallable(() -> remoteRestClient.getData())
            .publishOn(Schedulers.boundedElastic());
}
You have chosen to place it after the call, so the thread-pool switch will happen after the call to getData.
publishOn's placement in the chain matters, while subscribeOn affects the entire chain of operators, which means its placement does not matter.
So to answer your question again: yes, it is most likely a blocking call (I can't confirm it 100% since I have not looked into the source code), and whether you solve it with publishOn or subscribeOn is up to you.
Or look into whether there is a reactive alternative library to use.
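For completeness, a minimal sketch of the subscribeOn variant the Reactor docs recommend for blocking calls; remoteRestClient is assumed to be an injected instance of the Feign client rather than the interface itself:
private Mono<Data> getData() {
    return Mono.fromCallable(() -> remoteRestClient.getData()) // blocking Feign call, including the interceptor
            .subscribeOn(Schedulers.boundedElastic());          // the whole chain, fromCallable included, runs on the elastic pool
}
With subscribeOn, the placement in the chain does not matter, so both the auth-token request and the Feign request end up on the boundedElastic pool.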
The flow goes controller layer -> service layer.
Here I'm calling the processLOBTransactions (async) method from the controller layer.
How can I join all the CompletableFuture responses in the controller layer? My requirement is that, after processLOBTransactions has executed for each list element, I want to write a log entry in the controller layer.
Could anyone please give any suggestions on how to achieve this?
**Controller Layer:**
class ControllerLayer {

    pktransaction.getLineOfBusinessTransaction().stream().forEach((lob) -> {
        CompletableFuture<Boolean> futureResponse = flPbtService.processLOBTransactions(lob);
    });

    // HERE: how can I join all the CompletableFuture responses and print a log once all threads have completed?
}
**Service layer:**
class ServiceLayer {

    @Async("threadPoolTaskExecutor")
    public CompletableFuture<Boolean> processLOBTransactions(LineOfBusinessTransaction lobObj) {
        // doing some business logic and returning the CompletableFuture as the response
        return CompletableFuture.completedFuture(Boolean.TRUE);
    }
}
All the CompletableFuture<Boolean> futureResponse objects inside the forEach have to be stored in a collection.
Assuming parallelStream() will not be used, an ArrayList can be used to gather these references.
Outside the loop, these references can be iterated, and get() will obtain the result or the exception.
get() might wait until the actual task completes, so it may be better to use get(timeout, unit) to have a deterministic SLA contract.
get() can throw an exception, so be sure to catch it and take the appropriate action.
If get() with a timeout could not complete within that timeout, you can request cancellation (if it is not a high-priority operation), assuming the underlying task responds to interruption; that is a business decision.
ArrayList<CompletableFuture<Boolean>> futures = new ArrayList<>();
IntStream.range(0, 10).forEach((lob) -> {
CompletableFuture<Boolean> futureResponse = CompletableFuture.completedFuture(Boolean.TRUE);
futures.add(futureResponse);
});
for (CompletableFuture<Boolean> future : futures) {
try {
System.out.println(future.get());
// or future.get(1, TimeUnit.SECONDS)
} catch (InterruptedException | ExecutionException e) {
System.out.println(e.getMessage());
//future.cancel(true); // if need to cancel the underlying task, assuming the task listens
}
}
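As an alternative to looping over get(), you can join everything in one step with CompletableFuture.allOf; this is a small sketch using the futures list collected above, and the log message is only illustrative:
CompletableFuture
        .allOf(futures.toArray(new CompletableFuture[0]))
        .join(); // waits for every task; rethrows (wrapped) if any of them failed
System.out.println("All threads completed");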
I have two Singles (getChannels and getEPGs), both running in parallel. In most cases, getChannels completes before getEPGs and I can connect the EPGs to the channels. However, I would like to handle the cases where getEPGs completes before getChannels.
In other words,
Both Singles are running parallel.
To connect the EPG, the channels must have been loaded.
If getEPGs completes before getChannels, it must wait for getChannels, and only then will a method be invoked.
If the getEPGs fails, the app flow will continue regardless.
How can I accomplish this without relying on callbacks and while loops? I guess that there should be a reactive way to handle this case. Thanks in advance.
#GET
Single<ResponseBody> getChannels(#Url String url);
#Streaming
#GET
Single<ResponseBody> getEPGs(#Url String url);
getChannels [.............................]
getEPGs [..........................................]
Define your functions in a repository and access them through a ViewModel.
public interface Repository {

    @GET
    Single<ResponseBody> getChannels(@Url String url);

    @Streaming
    @GET
    Single<ResponseBody> getEPGs(@Url String url);
}
Now the ViewModel class:
public class SiteListViewModel extends BaseViewModel {

    private CompositeDisposable mDisposable = new CompositeDisposable();
    private Repository mRepository;

    public void getData() {
        mDisposable.add(
            mRepository.getChannels()
                .subscribeOn(Schedulers.io())
                .observeOn(AndroidSchedulers.mainThread())
                .subscribe(obj -> {
                    // CALL THE SECOND FUNCTION (getEPGs) OVER HERE
                }, throwable -> Log.e("Error", throwable.getMessage(), throwable))
        );
    }
}
2. To connect the EPG, the channels must have been loaded.
Okay,
ResponseBody channels = getChannels.blockingGet();
3. If the getEPGs is completed before the getChannels, it must wait for the getChannels, and only then a method will be invoked
Okay, wait for channels and call a method:
ResponseBody channels = getChannels.blockingGet();
aMethod(channels);
4. If the getEPGs fails, the app flow will continue regardless.
The above statements indeed ignore the result of getEPGs().
So strictly speaking, all your conditions can be expressed simply: wait for getChannels() and ignore getEPGs(). Those conditions are fully satisfied by the proposed two lines of code.
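If you prefer to keep it non-blocking, here is a rough sketch of the same conditions with Single.zip (RxJava 2/3 assumed; api, the URL variables and connectEpgs are illustrative names, not from your code). getEPGs failures are swallowed, and the zip only fires once getChannels has delivered the channels:
Single<ResponseBody> channels = api.getChannels(channelsUrl)
        .subscribeOn(Schedulers.io());
Single<Optional<ResponseBody>> epgs = api.getEPGs(epgUrl)
        .subscribeOn(Schedulers.io())
        .map(Optional::of)
        .onErrorReturnItem(Optional.empty()); // a failed getEPGs must not break the flow

Single.zip(channels, epgs, (channelBody, epgBody) -> {
            // the channels are guaranteed to be loaded here; connect EPGs only if they arrived
            epgBody.ifPresent(body -> connectEpgs(channelBody, body));
            return channelBody;
        })
        .subscribe(result -> { /* continue the app flow */ },
                   throwable -> { /* getChannels itself failed */ });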
I have this in my WebSocketHandler implementation:
@Override
public Mono<Void> handle(WebSocketSession session) {
return session.send(
session.receive()
.flatMap(webSocketMessage -> {
int id = Integer.parseInt(webSocketMessage.getPayloadAsText());
Flux<EfficiencyData> flux = service.subscribeToEfficiencyData(id);
var publisher = flux
.<String>handle((o, sink) -> {
try {
sink.next(objectMapper.writeValueAsString(o));
} catch (JsonProcessingException e) {
e.printStackTrace();
}
})
.map(session::textMessage);
return publisher;
})
);
}
The Flux<EfficiencyData> is currently generated for testing in the service as follows:
public Flux<EfficiencyData> subscribeToEfficiencyData(long weavingLoomId) {
return Flux.interval(Duration.ofSeconds(1))
.map(aLong -> {
longAdder.increment();
return new EfficiencyData(new MachineSpeed(
RotationSpeed.ofRpm(longAdder.intValue()),
RotationSpeed.ofRpm(0),
RotationSpeed.ofRpm(400)));
}).publish().autoConnect();
}
I am using publish().autoConnect() to make it a hot stream. I created a unit test that starts 2 threads that do this on the returned Flux:
flux.log().handle((s, sink) -> {
LOGGER.info("{}", s.getMachineSpeed().getCurrent());
}).subscribe();
In this case, I see both threads printing out the same value every second.
However, when I open 2 browser tabs, I don't see the same values in both web pages. The more websocket clients connect, the further the values drift apart (so each value from the original Flux seems to be sent to a different client, instead of being sent to all of them).
Managed to fix this thanks to Brian Clozel on Twitter.
The issue is that for each connecting websocket client, I call the service.subscribeToEfficiencyData(id) method, which returns a new Flux every time it is called. So of course, those independent Fluxes are not shared between the different websocket clients.
To fix the issue, I create the Flux instance in the constructor or a @PostConstruct method of my service, so that subscribeToEfficiencyData returns the same Flux instance every time.
Note that .publish().autoConnect() on the Flux remains important, because without that websocket clients will again see different values!
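A minimal sketch of that fix, with the shared Flux built once as a field of the service (the class name and @Service annotation are assumptions):
@Service
public class EfficiencyDataService {

    private final LongAdder longAdder = new LongAdder();

    // built once, so every websocket client subscribes to the same hot stream
    private final Flux<EfficiencyData> efficiencyData = Flux.interval(Duration.ofSeconds(1))
            .map(tick -> {
                longAdder.increment();
                return new EfficiencyData(new MachineSpeed(
                        RotationSpeed.ofRpm(longAdder.intValue()),
                        RotationSpeed.ofRpm(0),
                        RotationSpeed.ofRpm(400)));
            })
            .publish()
            .autoConnect();

    public Flux<EfficiencyData> subscribeToEfficiencyData(long weavingLoomId) {
        return efficiencyData; // same shared instance for every caller
    }
}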
This is a design question and I am asking for some ideas.
I have a REST method that triggers long-running tasks (10~15 minutes).
As the function takes a long time, I run it in a thread; this avoids the method timing out, but how can I know if the thread went wrong?
Runnable loader = new Runnable() {
    public void run() {
        // tasks
    }
};
(new Thread(loader)).start();
Update: the REST service looks like this:
@Path()
beginload() {
    // let the thread run and return info first
    // how can I know if this thread went wrong?
    (new Thread(loader)).start();
    return "need 15 minutes";
}
Conceptually there has to be a way for the service to communicate a failure to the client. There are multiple ways you can do this. Here are three examples:
After the client calls the service, the service immediately returns a job ID. The client can use the job ID later to query the service for the status (including error). For example, when you launch instances at AWS EC2, it takes a while for EC2 to service the request, so the launch request returns a so-called "reservation ID" that you can use in subsequent operations (like querying for status, terminating the launch, etc.).
Pro: Usable in a wide variety of cases, and easy enough to implement.
Con: Requires polling. (I.e. more chatty.)
The client offers a callback URI that the service invokes upon job completion. The callback URI can either be configured into the service, or else passed along as a request parameter. (Don't hardcode the callback URI in the service since services shouldn't depend on their clients.)
Pro: Still pretty simple, and avoids polling.
Con: Client has to have URI for the service to call, which may not be convenient. (E.g. the client may be a desktop app rather than a service, firewall may prevent it, etc.)
The service pushes a notification into a message queue, and the client listens to that queue.
Pro: Avoids polling, and client doesn't need endpoints to call.
Con: More work to set up (requires messaging infrastructure).
There are other possibilities but those are typical approaches.
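As a rough sketch of the callback option (2), under the assumption that the client passed a callbackUri request parameter and that rest is a RestTemplate; runLongTask and the payload strings are purely illustrative:
ExecutorService executor = Executors.newSingleThreadExecutor();

public String beginLoad(String callbackUri) {
    executor.submit(() -> {
        try {
            runLongTask();                                           // the 10-15 minute job
            rest.postForEntity(callbackUri, "SUCCESS", Void.class);  // notify the client
        } catch (Exception e) {
            rest.postForEntity(callbackUri, "FAILED: " + e.getMessage(), Void.class);
        }
    });
    return "need 15 minutes";
}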
Do you need to differentiate between different requests? If there are several tasks to perform, you need an ID.
You can do something like the following:
private static final ExecutorService es = Executors.newFixedThreadPool(10);
// use a concurrent map because several request threads may touch it
private static final Map<Long, Future<Void>> map = new ConcurrentHashMap<>();

@GET
@Path("/submit")
public Response submitTask() {
    long id = System.currentTimeMillis();
    Future<Void> future = es.submit(new Callable<Void>() {
        public Void call() throws Exception {
            // long task
            // you must throw an exception for a bad task
            return null;
        }
    });
    map.put(id, future);
    return Response.ok(id, MediaType.TEXT_PLAIN).build();
}

@GET
@Path("/status/{id}")
public Response taskStatus(@PathParam("id") long id) {
    Future<Void> future = map.get(id);
    if (future.isDone()) {
        try {
            future.get();
            return Response.ok("Successful!", MediaType.TEXT_PLAIN).build();
        } catch (InterruptedException | ExecutionException e) {
            // log
            return Response.ok("Bad task!", MediaType.TEXT_PLAIN).build();
        }
    }
    return Response.ok("Wait a few seconds.", MediaType.TEXT_PLAIN).build();
}
This can give you an idea. Remember to purge the map of old tasks.
If you want to get the return value of your thread and throw/catch possible exceptions, consider using Callable rather than Runnable; it can be used along with an ExecutorService, which provides more functionality.
Callable : A task that returns a result and may throw an exception.
Implementors define a single method with no arguments called call.
public interface Callable<V> {
V call() throws Exception;
}
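A minimal sketch of that suggestion: submit the Callable to an ExecutorService and inspect the Future to find out whether the task went wrong (the task body and the 15-minute timeout are placeholders):
ExecutorService executor = Executors.newSingleThreadExecutor();
Future<String> future = executor.submit(() -> {
    // long-running task; any exception thrown here is captured by the Future
    return "done";
});

try {
    String result = future.get(15, TimeUnit.MINUTES); // blocks; rethrows the task's exception, wrapped
} catch (ExecutionException e) {
    // the task "went wrong": e.getCause() is the original exception
} catch (TimeoutException | InterruptedException e) {
    // handle timeout / interruption
}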