So we have kind of a nasty Spring MVC setup where the path of a request is basically:
gateway -> controller -> service -> dmzgateway -> dmzcontroller -> dmzservice -> external gateway
Essentially, at the first gateway a user passes in a date range, and I'd like to asynchronously process each month within the range in the controller:
@Async
public Response processFoo(LocalDate fromDate, LocalDate toDate) {
    List<Response> responses = new ArrayList<>();
    List<CompletableFuture<Response>> futures = new ArrayList<>();
    while (fromDate.isBefore(toDate)) {
        LocalDate start = fromDate;
        futures.add(CompletableFuture.supplyAsync(() -> processService(start, start.plusDays(30))));
        fromDate = fromDate.plusDays(30);
    }
    for (CompletableFuture<Response> future : futures) {
        responses.add(future.join());
    }
    // ... combine responses into the single Response that gets returned
}
This piece of code resides in my first controller, and I've used @EnableAsync, created the task executor bean in config, and made sure the servlets support async, but I keep losing these threads. I keep getting the "use request context" error, but it's already set in my web.xml as well.
I'm just wondering if I'm approaching this problem incorrectly and whether I should be doing the async work somewhere else in this flow. Should I expect to lose threads with this architecture?
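For reference, my async configuration looks roughly like this (simplified; the pool sizes and bean name are just placeholders):

@Configuration
@EnableAsync
public class AsyncConfig {

    // Simplified task executor; pool sizes and thread name prefix are placeholders
    @Bean(name = "taskExecutor")
    public ThreadPoolTaskExecutor taskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(4);
        executor.setMaxPoolSize(8);
        executor.setThreadNamePrefix("foo-async-");
        executor.initialize();
        return executor;
    }
}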
Here I have three subflows, one of which is an HTTP outbound call. I want that HTTP call to keep trying to get a response until a specified time. If it times out, the main flow should break and return an error message in JSON format as the output.
Below is the code:
@Bean
public IntegrationFlow flow() {
return flow ->
flow.handle(validatorService, "validateRequest")
.split()
.channel(c -> c.executor(Executors.newCachedThreadPool()))
.scatterGather(
scatterer ->
scatterer
.applySequence(true)
.recipientFlow(flow1())
.recipientFlow(
f ->
f.gateway(
flow2(), gateway -> gateway.replyTimeout(3000L).errorChannel("errorChannel")))
.recipientFlow(flow3()),
gatherer ->
gatherer
.releaseLockBeforeSend(true)
.releaseStrategy(group -> group.size() == 2))
.aggregate(someMethod1())
.to(someMethod2());
}
private IntegrationFlow returnError() {
return IntegrationFlows.from("errorChannel").handle(System.out::println).get();
}
I've added the errorChannel but how do I send a customized message to the user?
See documentation for error handling in the messaging gateway: https://docs.spring.io/spring-integration/docs/current/reference/html/messaging-endpoints.html#gateway-no-response.
Consider adding an errorChannel() alongside that replyTimeout() on the gateway definition to build the error reply you'd like. However, you may also consider adding something like a request timeout to the RestTemplate you use for that HTTP call, to prevent a long wait for the HTTP response.
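For example, something along these lines (just a sketch; the channel name, bean name, and timeout values are placeholders):

// Gateway with both a reply timeout and an explicit error channel
f.gateway(flow2(), gateway -> gateway
        .replyTimeout(3000L)
        .errorChannel("httpErrorChannel"));

// A RestTemplate with request timeouts for the HTTP outbound call
@Bean
public RestTemplate httpOutboundRestTemplate() {
    SimpleClientHttpRequestFactory requestFactory = new SimpleClientHttpRequestFactory();
    requestFactory.setConnectTimeout(2000); // connect timeout in ms (arbitrary)
    requestFactory.setReadTimeout(3000);    // read timeout in ms (arbitrary)
    return new RestTemplate(requestFactory);
}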
UPDATE
First of all, you need to understand that request-reply is a bit of a block-and-then-wait approach. We send a request, and if the process consuming that message is blocking - performed immediately in the thread producing the message - then we don't reach the "wait" part until that process lets go. In most cases (by default) a DirectChannel is used, so it blocks because it is performed in the calling thread. That process might be your HTTP call, which is also a request-response pattern. So, we don't reach that "wait" part until the HTTP call returns, times out, or fails with an error. Only after that does the replyTimeout take effect, waiting for the reply from the underlying process. This can be changed if the input channel of that process is not direct. See an ExecutorChannel or QueueChannel. This way the sending part exits immediately, because there is nothing to block it, and it goes to the "wait" part to observe a CountDownLatch.
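For illustration only, a rough sketch of putting an ExecutorChannel in front of that blocking HTTP subflow (the executor is just a placeholder), so the send exits immediately and the replyTimeout() can take effect:

// Sketch: an ExecutorChannel in front of the blocking HTTP subflow
f -> f.channel(c -> c.executor(Executors.newCachedThreadPool()))
      .gateway(flow2(), gateway -> gateway.replyTimeout(3000L))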
So, you need to think again about whether that replyTimeout() option is appropriate for you or not. Perhaps the mentioned request timeout for the RestTemplate is a better option for you than reworking your flow into an async solution just to leverage that replyTimeout() feature. Again: see the documentation I've mentioned about the replyTimeout feature.
The error handling is described here: https://docs.spring.io/spring-integration/docs/current/reference/html/error-handling.html#error-handling.
It is really not recommended to rely on the global errorChannel bean. That is the one used everywhere in async processes where no explicit error channel is configured.
You said in your question "send a customized message to the user", but your error handling flow is one-way - System.out::println. If you want to return anything from the error handling flow, the endpoint must be a replying one, e.g.:
.handle((p, h) -> "The error during HTTP call: " + p)
Also check whether you declare that returnError() correctly. It really cannot be just a plain private method. The IntegrationFlow must be declared as a bean one way or another to initiate the wiring process for its endpoints and channels. Right now that one is just a plain, unused private method; the framework does not see it and does nothing with it. See the basics of the Java DSL in the docs: https://docs.spring.io/spring-integration/docs/current/reference/html/dsl.html#java-dsl
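Putting those two points together, a sketch could look like this (the channel name is a placeholder and should match the errorChannel() you configure on the gateway):

// Sketch: declare the error flow as a bean and make its last endpoint a replying one
@Bean
public IntegrationFlow returnError() {
    return IntegrationFlows.from("httpErrorChannel")
            .handle((p, h) -> "The error during HTTP call: " + p)
            .get();
}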
I am currently on a project that builds microservices, and we are trying to move from the more traditional Spring Boot REST client to the reactive stack, using Netty and WebClient as the HTTP client in order to connect to backend systems.
This is going well for backends with REST APIs; however, I'm still having some difficulties implementing WebClient in services that connect to SOAP backends and Oracle databases, which still use traditional JDBC.
I managed to find a workaround online for JDBC calls that makes use of parallel schedulers to publish the result of the blocking JDBC call:
// the method that is called by @Service
@Override
public Mono<TransactionManagerModel> checkTransaction(String transactionId, String channel, String msisdn) {
return asyncCallable(() -> checkTransactionDB(transactionId, channel, msisdn))
.onErrorResume(error -> Mono.error(error));
}
...
//the actual JDBC call
private TransactionManagerModel checkTransactionDB(String transactionId, String channel, String msisdn) {
...
List<TransactionManagerModel> result =
jdbcTemplate.query(CHECK_TRANSACTION, paramMap, new BeanPropertyRowMapper<>(TransactionManagerModel.class));
...
}
//Generic async callable
private <T> Mono<T> asyncCallable(Callable<T> callable) {
return Mono.fromCallable(callable).subscribeOn(Schedulers.parallel()).publishOn(transactionManagerJdbcScheduler);
}
and I think this works quite well.
For SOAP calls, what I did was encapsulate the SOAP call in a Mono, while the SOAP call itself uses a CloseableHttpClient, which is obviously a blocking HTTP client.
//The method that is being 'reactive'
public Mono<OfferRs> addOffer(String transactionId, String channel, String serviceId, OfferRq request) {
...
OfferRs result = adapter.addOffer(transactionId, channel, generateRequest(request));
...
}
//The SOAP adapter that uses blocking HTTP Client
public OfferRs addOffer(String transactionId, String channel, JAXBElement<OfferRq> request) {
...
response = (OfferRs) getWebServiceTemplate().marshalSendAndReceive(url, request, webServiceMessage -> {
try {
SoapHeader soapHeader = ((SoapMessage) webServiceMessage).getSoapHeader();
ObjectFactory headerFactory = new ObjectFactory();
AuthenticationHeader authHeader = headerFactory.createAuthenticationHeader();
authHeader.setUserName(username);
authHeader.setPassWord(password);
JAXBContext headerContext = JAXBContext.newInstance(AuthenticationHeader.class);
Marshaller marshaller = headerContext.createMarshaller();
marshaller.marshal(authHeader, soapHeader.getResult());
} catch (Exception ex) {
log.error("Failed to marshall SOAP Header!", ex);
}
});
return response;
...
}
My question is: is this implementation of the SOAP calls "reactive" enough that I won't have to worry about calls being blocked in some part of the microservice? I have already implemented the reactive stack - calling block() explicitly will throw an exception, as it's not permitted when using Netty.
Or should I adapt the use of parallel Schedulers in SOAP calls as well?
After some discussions I'll write an answer.
The Reactor documentation states that you should place blocking calls on their own schedulers. That's basically to keep the non-blocking part of Reactor going, and if something comes in that blocks, Reactor will fall back to traditional servlet behaviour, which means assigning one thread to each request.
Reactor has very good documentation about schedulers, their types, etc.
But in short:
subscribeOn
When someone subscribes, the subscription signal travels backwards upstream from the subscribe point through the operators until it reaches a producer of data (for example a database, or another service, etc.). If a subscribeOn operator is found anywhere along that chain, the entire chain is placed on its designated Scheduler. So one good thing to know is that the placement of subscribeOn does not really matter: as long as it is present in the chain, the entire chain will be affected.
Example usage could be:
We have blocking calls to a database, slow calls using a blocking REST client, reading a file from the filesystem in a blocking manner, etc.
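A minimal sketch, where blockingUserLookup and User are placeholders for any such blocking call and its result:

// Sketch: wrap the blocking call and subscribe the whole chain on a scheduler
// meant for blocking work; where subscribeOn sits in the chain does not matter
Mono<User> user = Mono.fromCallable(() -> blockingUserLookup(id))
        .subscribeOn(Schedulers.boundedElastic());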
publishOn
If you have a publishOn somewhere in the chain, the chain switches from the current scheduler to the designated scheduler at that specific point. So publishOn placement DOES matter, since the switch happens exactly where it is placed. This operator is more for controlling that you want to run something on a specific scheduler at a specific point in the code.
Example usage could be:
You are doing some heavy, CPU-bound calculations at a specific point; you could switch to Schedulers.parallel(), which runs the work on a pool sized to the number of cores, and when you are done you could switch back to the default scheduler.
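Roughly, where inputs, Result, and heavyCpuCalculation are placeholders:

// Sketch: everything after publishOn runs on the parallel scheduler,
// everything before it stays on whatever scheduler the chain started on
Flux<Result> results = Flux.fromIterable(inputs)
        .publishOn(Schedulers.parallel())
        .map(input -> heavyCpuCalculation(input));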
Your example above
Your SOAP calls should be placed on their own Scheduler if they are blocking, and I think a subscribeOn with Schedulers.boundedElastic() will be enough to get traditional servlet-like behaviour. If you are wary of having every blocking call on the same Scheduler, you could pass the Scheduler into the asyncCallable function and split up the calls to use different Schedulers.
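Applied to the SOAP adapter from the question, a sketch could look like this (reusing the method signature above; a dedicated Scheduler could be passed in instead of boundedElastic):

// Sketch: wrap the blocking SOAP adapter call the same way as the JDBC call
public Mono<OfferRs> addOffer(String transactionId, String channel, String serviceId, OfferRq request) {
    return Mono.fromCallable(() -> adapter.addOffer(transactionId, channel, generateRequest(request)))
            .subscribeOn(Schedulers.boundedElastic());
}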
I am confused about how an infinite loop of Feign calls might behave.
An example:
Assume I have 2 APIs, A & B.
If I call API A, which in turn calls API B via a Feign HTTP call, which in turn calls API A again via Feign, will it recognize this and break the call chain?
Quick flowchart of calls:
A -> B -> A -> B ... Repeat infinitely?
I have not tried this code; it is just an idea. But I am assuming that spring-cloud-starter-feign provides some way to resolve this problem? Is this assumption correct?
@PostMapping(RestJsonPath.API_A)
ResponseEntity<byte[]> apiA();

@PostMapping(RestJsonPath.API_B)
ResponseEntity<byte[]> apiB();
Will it execute until it times out, or will Hystrix stop it?
TL;DR:
Feign will keep the connection open on the initial request from A to B until the pre-configured timeout kicks in. At this point, Feign will time out the request and if you have specified a Hystrix fallback, Spring will use your Hystrix fallback as the response.
Explanation:
spring-cloud-starter-feign provides an abstraction layer for writing the HTTP request code. It will not handle potential loops or cycles in your code.
Here is an example spring boot feign client from their tutorials website for demonstration:
@FeignClient(value = "jplaceholder",
        url = "https://jsonplaceholder.typicode.com/",
        configuration = ClientConfiguration.class,
        fallback = JSONPlaceHolderFallback.class)
public interface JSONPlaceHolderClient {

    @RequestMapping(method = RequestMethod.GET, value = "/posts")
    List<Post> getPosts();

    @RequestMapping(method = RequestMethod.GET, value = "/posts/{postId}", produces = "application/json")
    Post getPostById(@PathVariable("postId") Long postId);
}
Notice first that this is an interface - all the code is auto-generated by Spring at startup time, and that code will make RESTful requests to the URLs configured via the annotations. For instance, the second request allows us to pass in a path variable, which Spring will ensure makes it onto the URL path of the outbound request.
The important thing to stress here is that this interface is only responsible for the HTTP calls, not for any potential loops. Logic using this interface (which I can inject into any other Spring bean as I would any other Spring bean) is up to you, the developer.
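For instance, usage might look like this (PostService is just an illustrative name); note that nothing here detects or breaks a call cycle for you:

// Sketch: the Feign client is injected and called like any other bean
@Service
public class PostService {

    private final JSONPlaceHolderClient client;

    public PostService(JSONPlaceHolderClient client) {
        this.client = client;
    }

    public List<Post> loadPosts() {
        return client.getPosts();
    }
}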
GitHub repo where this example came from.
Spring Cloud docs on spring-cloud-starter-openfeign.
Hope this helps you understand the purpose of the openfeign project, and helps you understand that it's up to you to deal with cycles and infinite loops in your application code.
As for Hystrix, that framework comes into play (if it is enabled) only if one of these generated HTTP requests fails, whether it's a timeout, a 4xx error, a 5xx error, or a response deserialization error. You configure Hystrix as a sensible default or fallback for when the HTTP request fails.
This is a decent tutorial on Hystrix.
Some points to call out: a Hystrix fallback must implement your Feign client interface, and you must specify this class as your Hystrix fallback in the @FeignClient annotation. Spring and Hystrix will call your Hystrix class automatically if a Feign request fails.
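For the example client above, such a fallback might look roughly like this (the class name matches the fallback attribute in the annotation; the return values are just placeholders):

// Sketch: a fallback implements the Feign client interface and is registered as a bean
@Component
public class JSONPlaceHolderFallback implements JSONPlaceHolderClient {

    @Override
    public List<Post> getPosts() {
        return Collections.emptyList(); // placeholder fallback value
    }

    @Override
    public Post getPostById(Long postId) {
        return null; // placeholder fallback value
    }
}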
I have a question regarding Spring WebFlux. I wanted to create a reactive endpoint that consumes content type text/event-stream - not produces, but consumes. One of our services needs to send a lot of small objects to another one, and we thought that streaming them this way might be a good solution.
@PostMapping(value = "/consumeStream", consumes = MediaType.TEXT_EVENT_STREAM_VALUE)
public Mono<Void> serve(@RequestBody Flux<String> data) {
return data.doOnNext(s -> System.out.println("MessageReceived")).then();
}
I am trying to use Spring WebClient to establish a connection to the endpoint and stream data to it. For example using code:
WebClient.builder().baseUrl("http://localhost:8080")
.clientConnector(new ReactorClientHttpConnector())
.build()
.post()
.uri("/test/serve")
.contentType(MediaType.TEXT_EVENT_STREAM)
.body(BodyInserters.fromPublisher(flux, String.class))
.exchange()
.block();
The flux is a stream that produces a single value every second.
The problem I have is that the WebClient fully reads the publisher and then sends the data as a whole, instead of streaming it element by element.
Is there anything I can do to achieve this with this client, or any other? I do not want to go the WebSockets way.
The SSE standard does not allow POST. There is no way to specify the method, even in the browser API: https://www.w3.org/TR/eventsource/
Server-Sent Events, as the name states, are designed for delivering events from the server to the client.
UPDATE: I upgraded the code to Java 8 without too much of a hassle. So I would like answers tied to Spring 4/Java 8.
I am working on a task to fix performance issues (the Tomcat max thread count of 200 is reached at a request rate of just 400/s, request latencies build up periodically, etc.) in a Tomcat/Spring 4.2.4/Java 8 web MVC application.
It is a typical web application which looks up MySQL via Hibernate for small but frequent things like user info per request, then does the actual data POST/GET to another web service via RestTemplate.
The code is in Java 7 style, as I only just migrated to Java 8 and no new code has been written in the newer style yet. (I am also back to using Spring after ages, so I'm not sure what would be best.)
As expected in such an application, the service layer calls other services for info and then passes that along to a call to the DAO. So I have some dependent callbacks here.
Setup
@EnableAsync is set
The flow of our HTTP requests goes from Controller -> Service -> DAO -> REST or Hibernate
Sample flow
Say the Controller receives a POST request R and is expected to return a DeferredResult
Controller calls entityXService.save()
EntityXService calls userService.findUser(id)
UserService calls UserDAO.findUser(id) which in turn talks to Hibernate
UserService returns a Spring ListenableFuture to the caller
EntityXService awaits the user info (using a callback) and then calls EntityXDAO.save(user, R)
EntityXDAO calls AsyncRestTemplate.postForEntity(user, R)
EntityXDAO receives a DeferredResult<...> wrapping our data abstraction for the response.
EntityXDAO processes the response and converts to EntityXDTO
Eventually somehow the DeferredResult is sent back through the same chain as a response.
I am getting lost at step 3: EntityXService asynchronously calls UserService.findUser(id) with an onSuccess callback that calls EntityXDAO.save(user, R). However, EntityXDAO.save(user, R) now also returns a DeferredResult from the AsyncRestTemplate.
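To make the confusion concrete, the shape I'm struggling with looks roughly like this (heavily simplified; RequestR, getUserId() and the DTO types are just placeholders for our real abstractions):

// Rough sketch of what I'm attempting in EntityXService
public DeferredResult<EntityXDTO> save(RequestR request) {
    DeferredResult<EntityXDTO> result = new DeferredResult<>();
    ListenableFuture<User> userFuture = userService.findUser(request.getUserId());
    userFuture.addCallback(
            user -> {
                // EntityXDAO.save() itself returns a DeferredResult from the AsyncRestTemplate -
                // this is the part I don't know how to bridge into the outer DeferredResult
                entityXDAO.save(user, request);
            },
            error -> result.setErrorResult(error));
    return result;
}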
Questions:
Is using DeferredResult a good way to get concurrency going in this application?
Is using Guava's ListenableFuture or Java 8's CompletableFuture going to help make it better in any way, rather than using DeferredResult?
My BIGGEST question and confusion is: how do I arrange for the DeferredResult from one service lookup to be used by another, and then finally set a DeferredResult of a completely different return type for the final response?
Is there an example of how to chain such callbacks and what is the recommended way to build such a flow? If this sounds completely wrong, is Java 7 going to be the right choice for this?
Thanks in advance!