JS:
$('#loaderImage').show();
$http.get('/utilities/longProcess')
.success(function(data, status, headers, config) {
console.log('Completed');
$scope.sampleJSON = data.pmdStructureWrapper;
$scope.sampleJSONDuplicates = data.pmdDuplicates;
$scope.$watch('sampleJSON', setTimeout(function() {
$('.panel-body li').each(function() {
if ($.trim($(this).text()) === "") {
$(this).hide();
}
});
}, 1000));
$('#loaderImage').hide();
})
.error(function(data, status, header, config) {
});
Controller:
@RequestMapping("/utilities/longProcess")
public DeferredResult<String> async(HttpServletResponse response, HttpServletRequest request) {
DeferredResult<String> dr = new DeferredResult<>();
CompletableFuture.supplyAsync(() -> {
return callURL(response, request);
}, ex).thenAccept((String message) -> {
dr.setResult(message);
});
return dr;
}
private String callURL(HttpServletResponse response, HttpServletRequest request){
PMDMainWrapper pmdMainWrapper = new PMDMainWrapper();
Map<String, PMDStructureWrapper> codeReviewByClass = new HashMap<>();
String partnerURL = this.partnerURL;
String toolingURL = this.toolingURL;
Cookie[] cookies = request.getCookies();
List<PMDStructure> violationStructure = null;
try {
violationStructure = metadataLoginUtil.startReviewer(partnerURL, toolingURL, cookies);
} catch (Exception e) {
e.printStackTrace();
}
PMDStructureWrapper pmdStructureWrapper = null;
List<PMDStructure> pmdStructureList = null;
List<PMDStructure> pmdDuplicatesList = new ArrayList<>();
int size = violationStructure.size();
long start = System.currentTimeMillis();
for (int i = 0; i < size; i++) {
if (codeReviewByClass.containsKey(violationStructure.get(i).getName())) {
PMDStructureWrapper pmdStructureWrapper1 = codeReviewByClass.get(violationStructure.get(i).getName());
List<PMDStructure> pmdStructures = pmdStructureWrapper1.getPmdStructures();
pmdStructures.add(violationStructure.get(i));
pmdStructureWrapper1.setPmdStructures(pmdStructures);
} else {
pmdStructureList = new ArrayList<>();
pmdStructureList.add(violationStructure.get(i));
pmdStructureWrapper = new PMDStructureWrapper();
pmdStructureWrapper.setPmdStructures(pmdStructureList);
codeReviewByClass.put(violationStructure.get(i).getName(), pmdStructureWrapper);
}
}
long stop = System.currentTimeMillis();
LOGGER.info("Total Time Taken from PMDController "+ String.valueOf(stop-start));
if (!codeReviewByClass.isEmpty()) {
pmdMainWrapper.setPmdStructureWrapper(codeReviewByClass);
pmdMainWrapper.setPmdDuplicates(pmdDuplicatesList);
Gson gson = new GsonBuilder().create();
return gson.toJson(pmdMainWrapper);
}
return "";
}
I am going with an async process because when the app is hosted on Heroku it takes almost 120 seconds to return the result to the page, but per the Heroku documentation a REST API must respond within 30 seconds, otherwise Heroku terminates the request.
But even after implementing the above logic I am still seeing the timeout error.
I have kept a console.log('Completed') in the JavaScript, but it only gets printed once the result comes back from the callURL method, which takes more than 120 seconds.
What I want to implement is: when the UI sends a request, it should keep receiving a "still loading" message so that the request does not get timed out.
CompletableFuture.supplyAsync() runs the specified supplier in a different thread (one from the common ForkJoinPool by default). thenAccept() only runs after the previous stage completes. So it won't return fast in your case; you're just making the long-running call in a different thread.
Instead, define a common object which acts as a cache (such as the HttpSession), and make the CompletableFuture return the object stored there. Execute callURL() only when the cache is empty:
@RequestMapping("/utilities/longProcess")
public CompletableFuture<String> async(HttpServletResponse response, HttpServletRequest request) {
HttpSession session = request.getSession();
return CompletableFuture.supplyAsync(() -> session.getAttribute("CACHED_RESULT"))
.thenComposeAsync(obj -> {
if (obj == null) {
CompletableFuture.supplyAsync(() -> callURL(response, request))
.thenAccept(result -> session.setAttribute("CACHED_RESULT", result));
return CompletableFuture.completedFuture("not ready yet");
}
return CompletableFuture.completedFuture(obj.toString());
});
}
You can also store a timestamp of the last call to callURL(), so that you don't start it again while a previous call is still in flight.
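A minimal sketch of that guard in plain Java, with hypothetical names (CachedResult, getOrCompute); in the answer above the holder would live in the HttpSession rather than in fields:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Hypothetical holder for one long-running result.
class CachedResult {
    private final AtomicReference<String> result = new AtomicReference<>();
    private final AtomicBoolean started = new AtomicBoolean(false);
    // Exposed so callers (and tests) can observe when the computation is done.
    volatile CompletableFuture<Void> pending = CompletableFuture.completedFuture(null);

    // Returns the cached value if ready; otherwise starts the long call
    // (at most once, guarded by the flag) and returns a placeholder.
    String getOrCompute(Supplier<String> longCall) {
        String value = result.get();
        if (value != null) {
            return value;
        }
        if (started.compareAndSet(false, true)) {
            pending = CompletableFuture.supplyAsync(longCall).thenAccept(result::set);
        }
        return "not ready yet";
    }
}
```

The client then polls the endpoint until it stops receiving the placeholder, which is exactly the "keep receiving a message which says still loading" behaviour the question asks for.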
First time trying WebClient and a bit lost. I'm trying to call an API potentially up to 20 times, in parallel, and process the response objects as they come in, then return the combined response. I have it almost working: the responses are processed as they arrive and my response object is built from them. However it's not blocking, so by the time the response object is finished building, my API has already returned an empty response such as: {}
API:
public GetHistoricalRes getHistoricalDaily(GetHistoricalReq getHistoricalReq) {
GetHistoricalRes historicalDailyQuotesRes = new GetHistoricalRes();
List<Mono<GenHistoricalRes>> genHistoricalDailyQuotes = new ArrayList<>();
for (String ticker : getHistoricalReq.getTickers()) {
genHistoricalDailyQuotes.add(MrMarketClient.getHistoricalDailyQuotes(ticker, getHistoricalReq.getTo(), getHistoricalReq.getFrom()));
}
Flux.merge(genHistoricalDailyQuotes).subscribe((genHistoricalRes) -> {
historicalDailyQuotesRes.getQuotes().put(genHistoricalRes.getSymbol(), genHistoricalRes);
});
return historicalDailyQuotesRes;
}
WebClient:
public Mono<GenHistoricalRes> getHistoricalDailyQuotes(String ticker, String to, String from) {
String historicalPricePath = "/historical-price-full/" + ticker;
return this.getClient()
.get()
.uri(builder -> builder
.path(historicalPricePath)
.queryParam("apikey", apiKey)
.queryParam("from", from)
.queryParam("to", to)
.build())
.header(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE)
.accept(MediaType.APPLICATION_JSON)
.exchangeToMono(
response -> {
if (response.statusCode().equals(HttpStatus.OK)) {
return response.bodyToMono(GenHistoricalRes.class)
.log();
} else {
return response.createException()
.flatMap(Mono::error);
}
});
}
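The root cause is that subscribe() only registers the callback and returns immediately, so getHistoricalDaily returns before any response has arrived. In Reactor the usual fix is to keep the pipeline together and wait for it (or return the Mono/Flux itself); the same fan-out-then-wait shape can be sketched with plain CompletableFutures, with the HTTP call replaced by a stand-in supplier:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

class FanOut {
    // Fires one async "call" per ticker, then waits for all of them before
    // returning, which is exactly the step the subscribe() version skips.
    static Map<String, String> fetchAll(List<String> tickers) {
        Map<String, String> results = new ConcurrentHashMap<>();
        List<CompletableFuture<Void>> calls = new ArrayList<>();
        for (String ticker : tickers) {
            calls.add(CompletableFuture
                    .supplyAsync(() -> "quote-for-" + ticker) // stand-in for the HTTP call
                    .thenAccept(quote -> results.put(ticker, quote)));
        }
        // Wait until every call has completed; without this the map would be
        // returned (possibly still empty) while the calls are in flight.
        CompletableFuture.allOf(calls.toArray(new CompletableFuture[0])).join();
        return results;
    }
}
```

In the reactive version the equivalent is to not subscribe at all inside the method, but to collect the merged Flux and either return the resulting Mono or, as a last resort at the edge of the application, block on it.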
I am using a library called Milo; it is written with Java 8 features such as CompletableFuture.
Now I want to get data from REST using OkHttp, but I don't know how to implement this with CompletableFuture.
Below is my code
@Override
public void run(OpcUaClient client, CompletableFuture<OpcUaClient> future) throws Exception {
// synchronous connect
client.connect().get();
List<NodeId> nodeIds = ImmutableList.of(new NodeId(2, "HelloWorld/ScalarTypes/Int32"));
OkHttpClient okHttpClient = new OkHttpClient();
Request.Builder requestBuilder = new Request.Builder().url("http://localhost:8080/greeting");
for (int i = 0; i < 10; i++) {
Request request = requestBuilder.build();
Call call= okHttpClient.newCall(request);
final GreetingModel[] greetingModel = {new GreetingModel()};
call.enqueue(new Callback() {
@Override
public void onFailure(@NotNull Call call, @NotNull IOException e) {
logger.error("Writing is wrong");
}
@Override
public void onResponse(@NotNull Call call, @NotNull Response response) throws IOException {
String json = response.body().string();
logger.info("Connecting is good");
greetingModel[0] = JSON.parseObject(json, GreetingModel.class);
}
});
Variant v = new Variant(greetingModel[0].getId());
// don't write status or timestamps
DataValue dv = new DataValue(v, null, null);
// write asynchronously....
CompletableFuture<List<StatusCode>> f =
client.writeValues(nodeIds, ImmutableList.of(dv));
// ...but block for the results so we write in order
List<StatusCode> statusCodes = f.get();
StatusCode status = statusCodes.get(0);
if (status.isGood()) {
logger.info("Wrote '{}' to nodeId={}", v, nodeIds.get(0));
}
}
future.complete(client);
}
}
And the figure shows the result: the write operation runs ahead of the data acquisition from OkHttp.
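That ordering is expected: enqueue() returns immediately and the Callback fires later on another thread, so the write sees a still-empty GreetingModel. The usual bridge is to complete a CompletableFuture from inside the callback and chain the write after it. A sketch with a plain callback interface standing in for OkHttp's Callback (with OkHttp, onResponse/onFailure would call complete/completeExceptionally in the same way):

```java
import java.util.concurrent.CompletableFuture;

class CallbackBridge {
    // Stand-in for OkHttp's Callback: one success path, one failure path.
    interface Callback {
        void onResponse(String body);
        void onFailure(Exception e);
    }

    // Stand-in for call.enqueue(...): invokes the callback asynchronously.
    static void enqueue(Callback cb) {
        CompletableFuture.runAsync(() -> cb.onResponse("{\"id\":42}"));
    }

    // Bridge: complete the future from inside the callback, so later stages
    // (e.g. the OPC UA write) can be chained with thenAccept/thenCompose
    // instead of racing ahead of the HTTP response.
    static CompletableFuture<String> fetchAsync() {
        CompletableFuture<String> future = new CompletableFuture<>();
        enqueue(new Callback() {
            @Override public void onResponse(String body) { future.complete(body); }
            @Override public void onFailure(Exception e) { future.completeExceptionally(e); }
        });
        return future;
    }
}
```

With this shape, the write in the loop would be chained after fetchAsync() (or the future joined before building the Variant), so the value written is the one actually returned by the REST call.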
I'm trying to pull data from a REST-style web-service which delivers content in pages.
The only way I can know that I've reached the end is when I ask for a page and there are no results. I'd like to terminate the stream at that time.
I've written the following Java code. The first function pulls a single page from the web-service and returns it as a stream. The second function flatmaps the streams together into a single stream.
public Stream<ApplicationResponse> getApplications(String token, RestTemplate rt, Integer page, Integer pageSize) {
HttpEntity<String> entity = new HttpEntity<>("parameters", getHeaders(token));
String url = String.format("%s?PageIndex=%s&PageSize=%s", endpoint, page, pageSize);
ResponseEntity<ApplicationCollection> ar = rt.exchange(url, HttpMethod.GET, entity, ApplicationCollection.class);
ApplicationResponse[] res = Objects.requireNonNull(ar.getBody()).getData();
// Do something here when res is empty, so that the stream ends
return Arrays.stream(res);
}
public Stream<ApplicationResponse> getApplications(String token, RestTemplate rt) {
// This function does the right thing, except when we run out of data!
return IntStream.iterate(1, i -> i + 1).mapToObj(i -> getApplications(token, rt, i, 500)).flatMap(Function.identity());
}
The problem is, how do I allow this to end?
If I were writing this in Python I'd raise a StopIteration exception at the point where I know there's nothing left to put onto the stream. Is there something similar I can do?
The best thing I could think of was to use a null, or raise an exception to signify the end of data, and then wrap the stream in an Iterator that knows to stop when that signal is received. But is there anything more idiomatic I can do?
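On Java 9+ there is a more idiomatic option than a sentinel or an exception: build the page stream lazily and cut it with takeWhile at the first empty page. A sketch with a fake page source, where getPage stands in for the REST call:

```java
import java.util.stream.IntStream;
import java.util.stream.Stream;

class Pages {
    // Stand-in for the REST call: pages 1..3 have data, page 4 is empty.
    static int[] getPage(int page) {
        return page <= 3 ? new int[] { page * 10, page * 10 + 1 } : new int[0];
    }

    // Java 9+: takeWhile cuts the infinite page stream at the first empty
    // page, then the remaining pages are flattened into one stream.
    static Stream<Integer> allItems() {
        return IntStream.iterate(1, i -> i + 1)
                .mapToObj(Pages::getPage)
                .takeWhile(page -> page.length > 0)
                .flatMap(page -> IntStream.of(page).boxed());
    }
}
```

Like the Spliterator answer below, this fetches one extra (empty) page to detect the end, which is unavoidable given the API's contract.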
After comments from Holger, I gave it a shot and tried Spliterator instead of Iterator. It is indeed simpler, as next and hasNext are... kinda combined into tryAdvance? It is even short enough to just inline it into a util method, imo.
public static Stream<ApplicationResponse> getApplications(String token, RestTemplate rt)
{
return StreamSupport.stream(new AbstractSpliterator<ApplicationResponse[]>(Long.MAX_VALUE,
Spliterator.ORDERED
| Spliterator.IMMUTABLE)
{
private int page = 1;
@Override
public boolean tryAdvance(Consumer<? super ApplicationResponse[]> action)
{
HttpEntity<String> entity = new HttpEntity<>("parameters", getHeaders(token));
String url = String.format("%s?PageIndex=%s&PageSize=%s", endpoint, page, 500);
ResponseEntity<ApplicationCollection> ar = rt.exchange(url, HttpMethod.GET, entity,
ApplicationCollection.class);
ApplicationResponse[] res = Objects.requireNonNull(ar.getBody()).getData();
if (res.length == 0)
return false;
page++;
action.accept(res);
return true;
}
}, false).flatMap(Arrays::stream);
}
You could implement an Iterator and create a Stream of it:
public class ResponseIterator
implements Iterator<Stream<ApplicationResponse>>
{
private int page = 1;
private String token;
private RestTemplate rt;
private ApplicationResponse[] next;
private ResponseIterator(String token, RestTemplate rt)
{
this.token = token;
this.rt = rt;
}
public static Stream<ApplicationResponse> getApplications(String token, RestTemplate rt)
{
Iterable<Stream<ApplicationResponse>> iterable = () -> new ResponseIterator(token, rt);
return StreamSupport.stream(iterable.spliterator(), false).flatMap(Function.identity());
}
@Override
public boolean hasNext()
{
if (next == null)
{
next = getNext();
}
return next.length != 0;
}
@Override
public Stream<ApplicationResponse> next()
{
if (next == null)
{
next = getNext();
}
Stream<ApplicationResponse> nextStream = Arrays.stream(next);
next = getNext();
return nextStream;
}
private ApplicationResponse[] getNext()
{
HttpEntity<String> entity = new HttpEntity<>("parameters", getHeaders(token));
String url = String.format("%s?PageIndex=%s&PageSize=%s", endpoint, page, 500);
ResponseEntity<ApplicationCollection> ar = rt.exchange(url, HttpMethod.GET, entity,
ApplicationCollection.class);
ApplicationResponse[] res = Objects.requireNonNull(ar.getBody()).getData();
page++;
return res;
}
}
It will check whether the next response is empty in hasNext(), stopping the stream. Otherwise, it will stream and flatMap that response. I have hardwired pageSize, but you can easily make that a third input for the factory method ResponseIterator.getApplications().
Here is my controller. I used Postman to test if it's working, but I am getting an empty response. I used @EnableAsync in the application configuration and @Async on the service. If I remove @Async on the service layer it works, but then it doesn't run asynchronously.
@ApiOperation(value = "search person by passing search criteria event/title/role/host/is_current", response = ElasticSearchResultData.class)
@RequestMapping(value = "/async2/searchPerson", produces = "application/json", method = RequestMethod.POST)
public @ResponseBody CompletableFuture<ElasticSearchResultData> searchPersonAsync2(@RequestBody SearchCriteriaTo criteriaForDivNetFolderTo,
HttpServletRequest request, HttpServletResponse response){
LOGGER.info("searchPerson controller start");
SearchCriteria searchCriteria = criteriaForDivNetFolderTo.getSearchCriteria();
if (Util.isNull(searchCriteria))
throw new IllegalArgumentException("search criteria should not be null.");
try {
CompletableFuture<ElasticSearchResultData> searchPerson = cubService.searchPersonAsync2(criteriaForDivNetFolderTo);
ObjectMapper mapper = new ObjectMapper();
LOGGER.info("search Person "+mapper.writeValueAsString(searchPerson));
return searchPerson;
} catch (Exception e) {
LOGGER.error("Exception in searchPersonAsync controller "+e.getMessage());
}
return null;
}
Service
@Async
@Override
public CompletableFuture<ElasticSearchResultData> searchPersonAsync2(SearchCriteriaTo searchCriteriaForDivNetFolderTo) {
Long start = System.currentTimeMillis();
LOGGER.info(":in searchPerson");
CompletableFuture<ElasticSearchResultData> completableFuture = new CompletableFuture<>();
ElasticSearchResultData searchResultData = null;
SearchCriteria searchCriteria = searchCriteriaForDivNetFolderTo.getSearchCriteria();
try {
LOGGER.info("************ Started searchPerson by criteria ************");
StringBuilder url = new StringBuilder();
url.append(equilarSearchEngineApiUrl)
.append(focusCompanySearchUrl)
.append("/")
.append("searchPerson")
.append("?view=").append(VIEW_ALL)
.append("&isProcessing=true");
LOGGER.debug("Calling equilar search engine for focused company search, URL : " + url);
LOGGER.info(searchCriteria.toString());
String output = null;
if (redisEnable != null && redisEnable) {
output = cacheDao.getDataFromRestApi(url.toString(), RequestMethod.POST.name(), searchCriteria);
} else {
output = Util.getDataFromRestApi(url.toString(), RequestMethod.POST.name(), searchCriteria);
}
if (!Util.isEmptyString(output)) {
ObjectMapper objectMapper = new ObjectMapper();
objectMapper.setSerializationInclusion(JsonInclude.Include.NON_NULL);
objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
searchResultData = objectMapper.readValue(output,
objectMapper.getTypeFactory().constructType(ElasticSearchResultData.class));
}
List<PersonSearchDetails> newPersonDetails = new ArrayList<PersonSearchDetails>();
if (!Util.isNull(searchResultData) && !Util.isNullOrEmptyCollection(searchResultData.getPersonDetails())
&& !Util.isNullOrEmptyCollection(searchCriteriaForDivNetFolderTo.getNetworkFoldersData())) {
for (PersonSearchDetails personDetail : searchResultData.getPersonDetails()) {
String logoUrl = null;
if(!Util.isNull(searchCriteria.getTargetFolderId())){
List<DiversityNetworkFolderTo> filteredFolderTos = searchCriteriaForDivNetFolderTo
.getNetworkFoldersData()
.stream()
.filter(folder -> folder.getId()
.longValue() == searchCriteria
.getTargetFolderId())
.collect(Collectors.toList());
logoUrl = getLogoUrl(personDetail.getPersonId(),
filteredFolderTos);
} else {
logoUrl = getLogoUrl(personDetail.getPersonId(),
searchCriteriaForDivNetFolderTo.getNetworkFoldersData());
}
personDetail.setLogoUrl(logoUrl);
newPersonDetails.add(personDetail);
}
searchResultData.setPersonDetails(newPersonDetails);
}
completableFuture.complete(searchResultData);
return completableFuture;
} catch (Exception e) {
completableFuture.completeExceptionally(e);
LOGGER.error(
" ************** Error in proccessing searchPerson by criteria ************** " + e.getMessage());
}
Long end = System.currentTimeMillis();
LOGGER.info(TIME_DURATION+(end - start)+"ms");
return null;
}
It would be good to read more about async processing; the Javadocs are usually a great start!
If you really want the result of a Future method, you need to wait for it.
There is a public T get() method in the CompletableFuture API that waits for the result to be created and returns it once it's done.
If your job is to search a database for the result and then return it, you will still have to wait; async is not much help here. It would help if you had to do multiple things at the same time, e.g. a call to the DB, a web service, and something else, then you could create an array of futures and wait for all of them to complete.
Or, say you're writing a POST method: you can quickly validate the input, hand the store-to-DB work off asynchronously, and return the response to the UI right away, hoping the async method completes in another thread without any errors to report.
This is a great technique when you know what you're doing, but think about if and when you really need it before using it.
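The "multiple things at the same time" case is where futures actually pay off. A minimal sketch with the DB and web-service calls replaced by stand-in suppliers: both are started together, and thenCombine waits for both, so total latency is roughly the slower call rather than the sum of the two:

```java
import java.util.concurrent.CompletableFuture;

class Combine {
    // Two independent calls started together (a DB lookup and a web-service
    // call are simulated by the suppliers).
    static String loadProfile() {
        CompletableFuture<String> db = CompletableFuture.supplyAsync(() -> "row-7");
        CompletableFuture<String> ws = CompletableFuture.supplyAsync(() -> "avatar.png");
        // thenCombine waits for both stages and merges their results.
        return db.thenCombine(ws, (row, avatar) -> row + "+" + avatar).join();
    }
}
```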
The short way to "fix" this is:
CompletableFuture<ElasticSearchResultData> searchPerson = cubService.searchPersonAsync2(criteriaForDivNetFolderTo);
ElasticSearchResultData result = searchPerson.get();
ObjectMapper mapper = new ObjectMapper();
LOGGER.info("search Person "+mapper.writeValueAsString(result));
return result;
(and obviously change the method's return type)
Sometimes requests fail, and when it's not due to a client error (i.e. not a 4xx), I'd like to retry sending the same request.
All my requests have a request id header, and when I send a certain request again (retry) I need to send the same id as the one used in the first place.
This sounds like a simple thing, but it turns out to be quite hard to accomplish with Retrofit 2, unless of course I'm missing something.
All of my requests are async, and so in the callback, in case I need to retry I'm doing this:
public void onResponse(final Call<T> call, final Response<T> response) {
if (response.isSuccessful()) {
handleResponse(response);
} else if (response.code() >= 400 && response.code() < 500) {
handleClientError(response);
} else {
call.clone().enqueue(this);
}
}
I also have an interceptor for adding headers to all requests:
new Interceptor() {
@Override
public Response intercept(Chain chain) throws IOException {
final Request request = chain.request();
final Request.Builder newRequestBuilder = request.newBuilder()
.addHeader("Header1-name", "Header1-value")
.addHeader("Header2-name", "Header2-value")
...
.addHeader("HeaderN-name", "HeaderN-value");
if (request.header("REQUEST-ID") == null) {
newRequestBuilder.addHeader("REQUEST-ID", UUID.randomUUID().toString());
}
return chain.proceed(newRequestBuilder.build());
}
};
I thought that since I'm cloning the Call then the (retry) request will have the headers of the previous one but that's not the case (probably because the Call is cloned and not the Request).
My problem is that I have no way of identifying the request or call in my Interceptor.intercept so I can't maintain a map of request/calls to id.
There's also no way of adding info to the calls/requests (as they are not generated by me and lack any setters for such a case).
I thought that maybe I can use the Request.tag method but again, I have no control of the object instance that is assigned there and the objects are different between requests.
And if we're already on this subject, what is this tag anyway? I can't find documentation about it.
Any idea how I can somehow pull this off?
Thanks
I took another approach to this: instead of combining an interceptor with retry logic in the callbacks, I wrote a solution that does the retrying inside a single interceptor:
class BackoffRetryInterceptor implements Interceptor {
private static final long RETRY_INITIAL_DELAY = 500;
private static final long RETRY_MAX_DELAY = 35000;
@Override
public Response intercept(final Chain chain) throws IOException {
Request request = chain.request();
final Headers.Builder headersBuilder = new Headers.Builder();
addHeaders(headersBuilder);
final Headers headers = headersBuilder.build();
long delay = RETRY_INITIAL_DELAY;
Response response = null;
IOException exception = null;
while (delay < RETRY_MAX_DELAY) {
exception = null;
request = request.newBuilder().headers(headers).build();
try {
response = chain.proceed(request);
if (response.isSuccessful() || response.code() != 500) {
return response;
}
// Close the failed response before retrying, otherwise the connection leaks.
response.close();
} catch (IOException e) {
exception = e;
}
try {
Thread.sleep(delay);
delay *= 2;
} catch (InterruptedException e) {
delay = RETRY_MAX_DELAY;
}
}
if (exception != null) {
throw exception;
}
return response;
}
private static void addHeaders(final Headers.Builder headers) {
headers.add("Header1-name", "Header1-value")
.add("Header2-name", "Header2-value")
...
.add("HeaderN-name", "HeaderN-value")
.add("Request-Id", UUID.randomUUID().toString());
}
}
This seems to work well in my tests.
The main problem though is the blocking of the network threads.
If anyone can think up a better solution I'd love to hear it.
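One small improvement that doesn't require solving the thread-blocking problem: the delay schedule (500 ms doubling while below the 35 s ceiling) can be factored into a pure function, which makes the number of attempts easy to reason about and test without any network:

```java
import java.util.ArrayList;
import java.util.List;

class Backoff {
    // Reproduces the interceptor's schedule: an attempt happens while the
    // current delay is below the ceiling, and the delay doubles after each.
    static List<Long> delays(long initial, long max) {
        List<Long> out = new ArrayList<>();
        for (long d = initial; d < max; d *= 2) {
            out.add(d);
        }
        return out;
    }
}
```

With the constants above this yields delays of 500, 1000, 2000, 4000, 8000, 16000, and 32000 ms, i.e. at most seven attempts per request.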