Error 204 in a Google App Engine API in Java

We have an API on Google App Engine. The API is a search engine: when a user requests a productID, the API returns a JSON response with a group of other productIDs (selected by specific criteria). This is the current configuration:
<instance-class>F4_1G</instance-class>
<automatic-scaling>
  <min-idle-instances>3</min-idle-instances>
  <max-idle-instances>automatic</max-idle-instances>
  <min-pending-latency>automatic</min-pending-latency>
  <max-pending-latency>automatic</max-pending-latency>
</automatic-scaling>
We use app_engine_release=1.9.23
The process works as follows: we make two calls to the datastore and one call with urlfetch (to an external API).
The problem is that from time to time we receive an error 204 with this trace:
ms=594 cpu_ms=0 exit_code=204 app_engine_release=1.9.23
A problem was encountered with the process that handled this request, causing it to exit. This is likely to cause a new process to be used for the next request to your application. (Error code 204)
This is what we got in the client:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "backendError",
        "message": ""
      }
    ],
    "code": 503,
    "message": ""
  }
}
We changed the number of resident instances from 3 to 7 and still got the same error. The errors also occur on the same instances, and we see 4 errors within a very short time span.
We found that the problem was with the urlfetch call: if we set a high timeout, it returns a lot of errors.
Any idea why this is happening?

I believe I have found the problem. It was related to the urlfetch call. I ran many tests until I isolated it: when I made calls only to the datastore, everything worked as expected, but as soon as I added the urlfetch call it produced the 204 errors. It happened every time, so I believe it could be a bug.
What I did to get rid of the error was to remove the Google Cloud Endpoints layer and use a basic servlet. With a plain servlet plus the urlfetch call we don't get the error, so the problem might not be related to urlfetch alone but to the combination of urlfetch and Google Cloud Endpoints.
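For reference, this is roughly what that workaround looks like: a plain servlet that performs the urlfetch call with an explicit deadline instead of going through Cloud Endpoints. It is only a minimal sketch; the servlet name, external URL, and 10-second deadline are placeholder assumptions, not the exact production code.

import java.io.IOException;
import java.net.URL;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.google.appengine.api.urlfetch.FetchOptions;
import com.google.appengine.api.urlfetch.HTTPMethod;
import com.google.appengine.api.urlfetch.HTTPRequest;
import com.google.appengine.api.urlfetch.HTTPResponse;
import com.google.appengine.api.urlfetch.URLFetchServiceFactory;

public class SearchServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Hypothetical external API endpoint; replace with the real one.
        URL externalApi = new URL("https://external-api.example.com/related?productId="
                + req.getParameter("productId"));

        // Explicit deadline (in seconds) so the fetch fails fast instead of
        // holding the instance until the request deadline is hit.
        HTTPRequest fetchRequest = new HTTPRequest(externalApi, HTTPMethod.GET,
                FetchOptions.Builder.withDeadline(10.0));

        HTTPResponse fetchResponse = URLFetchServiceFactory.getURLFetchService().fetch(fetchRequest);

        resp.setContentType("application/json");
        resp.getOutputStream().write(fetchResponse.getContent());
    }
}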

Related

Google API throwing Rate Limit Exceeded 403 exception for CDN cache invalidation

We are caching our pages and content in Google CDN.
Google provides an API to invalidate the cache for a particular page/path.
Our website is built with a CMS called AEM (Adobe Experience Manager), which supports constant page/content updates; for example, we may update what is shown on https://our-webpage/homepage.html twice in a day. When such an operation is done, we need to flush the cache at the Google CDN for "homepage.html".
This kind of activity is very common, meaning we send several thousand cache invalidation requests in a day.
We are sending so many invalidation requests that after some time we get this error:
Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 403 Forbidden
{
  "code" : 403,
  "errors" : [ {
    "domain" : "usageLimits",
    "message" : "Rate Limit Exceeded",
    "reason" : "rateLimitExceeded"
  } ],
  "message" : "Rate Limit Exceeded"
}
How do we solve this?
I've read this page https://developers.google.com/drive/api/v3/handle-errors
It mentions batching requests.
How do I send invalidation requests for multiple pages to Google CDN in one batch?
Or is it possible to increase the API flush-call limit to a higher number per day?
Right now, if we have 100 pages to flush from the CDN, we make the HTTP call below 100 times (one for each page).
CacheInvalidationRule requestBody = new CacheInvalidationRule();
// IMPORTANT
requestBody.setPath(pagePath);

Compute computeService = createComputeService();
Compute.UrlMaps.InvalidateCache request =
        computeService.urlMaps().invalidateCache(projectName, urlMap, requestBody);
Operation response = request.execute();

if (LOG.isDebugEnabled()) {
    LOG.debug("Google CDN Flush Response JSON :: {}", response);
}
LOG.info("Google CDN Flush Invalidation for Page Path {}:: Response Status Code:: {}", pagePath, response.getStatus());
We set the page to flush in requestBody.setPath(pagePath);
Can we do this in a more efficient way, such as sending all pages as an array of strings in one HTTP call?
Like:
requestBody.setPath(pagePath);
Where
pagePath="['/homepage.html','/videos.html','/sports/basketball.html','/tickets.html','/faqs.html']";
Rate Limit Exceeded is flood protection: you are going too fast, so slow down your requests.
Implement exponential backoff when retrying the requests.
You can periodically retry a failed request over an increasing amount of time to handle errors related to rate limits, network volume, or response time. For example, you might retry a failed request after one second, then after two seconds, and then after four seconds. This method is called exponential backoff and it is used to improve bandwidth usage and maximize throughput of requests in concurrent environments. When using exponential backoff, consider the following:
Start retry periods at least one second after the error.
If the attempted request introduces a change, such as a create request, add a check to make sure nothing is duplicated. Some errors, such as invalid authorization credentials or "file not found" errors, aren’t resolved by retrying the request.
Batching won't help much; you're still going to run into the same rate limit. I have even seen rate limit errors when batching.
Kindly note that your link is for the Google Drive API; I'm not even sure Cloud CDN supports batching of requests.
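As an illustration of that advice, a minimal backoff loop around the invalidateCache call might look like the sketch below. The helper method name, the retry count, the one-second starting delay, and treating a 403 as the retryable case are assumptions made for the example, not documented values.

import java.io.IOException;
import com.google.api.client.googleapis.json.GoogleJsonResponseException;
import com.google.api.services.compute.Compute;
import com.google.api.services.compute.model.CacheInvalidationRule;

void invalidateWithBackoff(Compute computeService, String projectName, String urlMap,
                           CacheInvalidationRule requestBody) throws IOException, InterruptedException {
    int maxAttempts = 5;        // assumed limit for this example
    long delayMillis = 1000L;   // start retries one second after the error
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            computeService.urlMaps().invalidateCache(projectName, urlMap, requestBody).execute();
            return;             // success, stop retrying
        } catch (GoogleJsonResponseException e) {
            // Retry only rate-limit style errors, and only while attempts remain.
            if (e.getStatusCode() != 403 || attempt == maxAttempts) {
                throw e;
            }
            Thread.sleep(delayMillis);
            delayMillis *= 2;   // 1s, 2s, 4s, ...
        }
    }
}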
Wouldn't it be better to aggregate several updates on the AEM side and send only one request to the CDN after a maximum period of time and/or a maximum number of changes?
I mean, if you change your homepage on AEM, usually you would invalidate all the subpages as well (navigation might change, ...).
Isn't there a possibility for the Google CDN to invalidate a tree or subtree?
At least that's what I would extract from this documentation https://cloud.google.com/sdk/gcloud/reference/compute/url-maps/invalidate-cdn-cache
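If subtree invalidation fits your content structure, the gcloud reference above shows that an invalidation path may end in /*, which covers everything under that prefix. Assuming the Java client simply passes the same path pattern through (the "/sports/*" path below is only an example), that would look something like:

// Hypothetical example: invalidate everything under /sports/ with one request
// instead of one request per page (invalidation paths may end in "/*").
CacheInvalidationRule rule = new CacheInvalidationRule();
rule.setPath("/sports/*");
computeService.urlMaps().invalidateCache(projectName, urlMap, rule).execute();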

Error in backend of REST API: "INFO: The connection was broken. It was probably closed by the client. Reason: Closed"

So I am trying out a simple full-stack project of my own that involves a Java backend implementation of a REST API, for which I am using the Restlet framework (the org.restlet packages) and Jetty as the server.
While I was testing my API with Postman, I noticed something weird: every time I started the server, only the first POST/PUT/DELETE HTTP request would get an answer, while the next ones would not, and this error message would appear on the console:
/* Timestamp-not-important */ org.restlet.engine.adapter.ServerAdapter commit
INFO: The connection was broken. It was probably closed by the client.
Reason: Closed
The GET HTTP requests, however, do not share that problem.
I said "fair enough, it's probably Postman's fault"; after all, the requests made it to the server and their effects were applied. However, now that I am building the front-end, this problem blocks the server's response: instead of a JSON object I get undefined (edit: actually I get 204 No Content) on the front-end and the same "INFO" message on the back-end for every POST/PUT/DELETE after the first one.
I have no idea what it is or what I am doing wrong. It has to be the backend's problem, right? But what should I look for?
Never mind, it was the stupidest thing ever. I tried to be "smart" about returning the same Representation object (with only a 'success' JSON field) on multiple occasions by keeping one instance in a static final field of a class. It turns out a new instance must be returned each time.
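In other words, the resource should build a fresh Representation per request rather than caching one in a static field; a Representation is generally treated as single-use and is released once it has been written, so a shared instance can have nothing left to send on the second request. A minimal sketch of the fix, with a hypothetical resource class and JSON body:

import org.restlet.data.MediaType;
import org.restlet.representation.Representation;
import org.restlet.representation.StringRepresentation;
import org.restlet.resource.Post;
import org.restlet.resource.ServerResource;

// Hypothetical resource class used only for illustration.
public class ItemResource extends ServerResource {

    @Post
    public Representation store(Representation entity) {
        // ... apply the side effects of the POST here ...

        // Build a new Representation for every request instead of
        // returning a shared static final instance.
        return new StringRepresentation("{\"success\": true}", MediaType.APPLICATION_JSON);
    }
}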

Google calendar API v3 returns (503 backendError) on channel creation

I'm using the Calendar API client library for Java to watch channels and get push notifications. Sometimes, when I try to create a channel on Google, it returns the following error response:
{
  "code" : 503,
  "errors" : [ {
    "domain" : "global",
    "message" : "Failed to create channel",
    "reason" : "backendError"
  } ],
  "message" : "Failed to create channel"
}
There is nothing about handling this error in the documentation:
https://developers.google.com/google-apps/calendar/v3/errors
However, I guess it could happen when a high number of requests is sent to Google and it refuses the connection. Maybe I need to retry after some time.
The question is what's the correct way to handle this error and to start watching the desired channel?
The root cause of this issue is probably heavy network traffic.
The Google Calendar API documentation suggests implementing exponential backoff for this kind of error.
An exponential backoff is an algorithm that repeatedly attempts to execute some action until that action has succeeded, waiting an amount of time that grows exponentially between each attempt, up to some maximum number of attempts.
You could find implementation ideas here.
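One option, assuming the standard google-http-client stack that the Java client library is built on, is to attach a backoff handler when the Calendar service is constructed, so 5xx responses such as this 503 are retried automatically. The transport, JSON factory, credential, and application name below are placeholders:

import java.io.IOException;
import com.google.api.client.http.HttpBackOffUnsuccessfulResponseHandler;
import com.google.api.client.http.HttpRequest;
import com.google.api.client.http.HttpRequestInitializer;
import com.google.api.client.http.HttpTransport;
import com.google.api.client.json.JsonFactory;
import com.google.api.client.util.ExponentialBackOff;
import com.google.api.services.calendar.Calendar;

Calendar buildCalendarService(final HttpRequestInitializer credential,
                              HttpTransport transport, JsonFactory jsonFactory) {
    return new Calendar.Builder(transport, jsonFactory, new HttpRequestInitializer() {
        @Override
        public void initialize(HttpRequest request) throws IOException {
            credential.initialize(request);
            // Retry 5xx responses (including 503 backendError) with exponential backoff.
            request.setUnsuccessfulResponseHandler(
                    new HttpBackOffUnsuccessfulResponseHandler(new ExponentialBackOff()));
        }
    }).setApplicationName("my-app").build();  // placeholder application name
}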

Java exception when creating multiple permissions on Google Drive documents

In our application we need to share multiple files with multiple users via the Google Drive API.
We use the batching provided by the Java client library for the Google Drive API.
This already runs in production, but we get a lot of unclear exceptions from the Google Drive API:
Internal Error. User message: "An internal error has occurred which prevented the sharing of these item(s): "
We handle the exceptions and retry with an exponential backoff, but these errors cause big delays in the flow and usability of this application.
What is the reason these exceptions occur? How to avoid those?
It would be very helpful if we knew what is going wrong when these exceptions occur, so we can avoid it.
Some extra information:
Each batch contains 100 permission operations on different files.
Every minute a batch operation is called.
The code:
import java.io.IOException;
import com.google.api.client.googleapis.batch.BatchRequest;
import com.google.api.client.googleapis.batch.json.JsonBatchCallback;
import com.google.api.client.googleapis.json.GoogleJsonError;
import com.google.api.client.http.HttpHeaders;
import com.google.api.services.drive.model.Permission;

String fileId = "1sTWaJ_j7PkjzaBWtNc3IzovK5hQf21FbOw9yLeeLPNQ";

JsonBatchCallback<Permission> callback = new JsonBatchCallback<Permission>() {
    @Override
    public void onFailure(GoogleJsonError e, HttpHeaders responseHeaders) throws IOException {
        System.err.println(e.getMessage());
    }

    @Override
    public void onSuccess(Permission permission, HttpHeaders responseHeaders) throws IOException {
        System.out.println("Permission ID: " + permission.getId());
    }
};

BatchRequest batch = driveService.batch();
for (String email : emails) {
    Permission userPermission = new Permission()
            .setType("user")
            .setRole("reader")
            .setEmailAddress(email);
    driveService.permissions().create(fileId, userPermission)
            .setSendNotificationEmail(false)
            .setFields("id")
            .queue(batch, callback);
}
batch.execute();
The variable emails contains 100 email strings.
{
  "code" : 500,
  "errors" : [ {
    "domain" : "global",
    "message" : "Internal Error. User message: \"An internal error has occurred which prevented the sharing of these item(s): filename\"",
    "reason" : "internalError"
  } ],
  "message" : "Internal Error. User message: \"An internal error has occurred which prevented the sharing of these item(s): filename\""
}
This is basically flood protection. The normal recommendation is to implement exponential backoff:
Exponential backoff is a standard error handling strategy for network
applications in which the client periodically retries a failed request
over an increasing amount of time. If a high volume of requests or
heavy network traffic causes the server to return errors, exponential
backoff may be a good strategy for handling those errors. Conversely,
it is not a relevant strategy for dealing with errors unrelated to
rate-limiting, network volume or response times, such as invalid
authorization credentials or file not found errors.
Used properly, exponential backoff increases the efficiency of
bandwidth usage, reduces the number of requests required to get a
successful response, and maximizes the throughput of requests in
concurrent environments.
Now here is where you are going to say, "but I am batching, I can't do that." Yup, batching falls under the same flood protection: your batch is flooding the server. Yes, I know it says you can send 100 requests, and you probably can if there is enough time between the requests not to qualify as flooding, but yours apparently doesn't.
My recommendation is that you try cutting it down to, say, 10 requests, and slowly step it up. You're not saving yourself anything by batching; the quota usage will be the same as if you hadn't batched. You can't go any faster than the flood protection allows.
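As a sketch of that suggestion, the loop below splits the email list into smaller batches and pauses between them. The helper method name, the batch size of 10, and the two-second pause are arbitrary starting points for tuning, not documented limits:

import java.io.IOException;
import java.util.List;
import com.google.api.client.googleapis.batch.BatchRequest;
import com.google.api.client.googleapis.batch.json.JsonBatchCallback;
import com.google.api.services.drive.Drive;
import com.google.api.services.drive.model.Permission;

void shareInSmallBatches(List<String> emails, String fileId,
                         Drive driveService, JsonBatchCallback<Permission> callback)
        throws IOException, InterruptedException {
    int batchSize = 10;        // assumed starting point; step it up gradually
    long pauseMillis = 2000L;  // assumed pause between batches

    for (int start = 0; start < emails.size(); start += batchSize) {
        BatchRequest batch = driveService.batch();
        for (String email : emails.subList(start, Math.min(start + batchSize, emails.size()))) {
            Permission userPermission = new Permission()
                    .setType("user")
                    .setRole("reader")
                    .setEmailAddress(email);
            driveService.permissions().create(fileId, userPermission)
                    .setSendNotificationEmail(false)
                    .setFields("id")
                    .queue(batch, callback);
        }
        batch.execute();
        Thread.sleep(pauseMillis); // crude spacing; keep the exponential backoff on errors as well
    }
}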

How to continue on client when heavy server computation is done

This might be a simple problem, but I can't seem to find a good solution right now.
I've got:
OldApp - a Java application started from the command line (no web front here)
NewApp - a Java application with a REST api behind Apache
I want OldApp to call NewApp through its REST api and when NewApp is done, OldApp should continue.
My problem is that NewApp does a lot of work that can take a long time, which in some cases causes a timeout in Apache, which then sends a 502 error to OldApp. The computation continues in NewApp, but OldApp does not know when NewApp is done.
One solution I thought of is to fork a thread in NewApp, store some kind of ID for the API request, and return the ID to OldApp. Then OldApp could poll NewApp to see if the thread is done and, if so, continue; otherwise, keep polling.
Are there any good design patterns for something like this? Am I complicating things? Any tips on how to think?
If NewApp is taking a long time, it should immediately return a 202 Accepted. The response should contain a Location header indicating where the user can go to look up the result when it's done, and an estimate of when the request will be done.
OldApp should wait until the estimate time is reached, then submit a new GET call to the location. The response from that GET will either be the expected data, or an entity with a new estimated time. OldApp can then try again at the later time, repeating until the expected data is available.
So the conversation might look like:
POST /widgets
response:
202 Accepted
Location: "http://server/v1/widgets/12345"
{
  "estimatedAvailableAt": "<whenever>"
}

GET /widgets/12345
response:
200 OK
Location: "http://server/v1/widgets/12345"
{
  "estimatedAvailableAt": "<wheneverElse>"
}

GET /widgets/12345
response:
200 OK
Location: "http://server/v1/widgets/12345"
{
  "myProperty": "myValue",
  ...
}
Yes, that's exactly what people are doing with REST now. Because there is no way to connect from the server to the client, the client just polls very often. There is also an improved method called "long polling", where the connection between client and server has a long timeout and the server sends the information back to the connected client as soon as it becomes available.
The question is about Java and servlets, so I would suggest looking at Servlet 3.0 asynchronous support.
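For reference, a minimal Servlet 3.0 sketch of that idea: the servlet frees the container thread with startAsync and completes the response from a worker thread once the heavy computation finishes. The servlet name, URL pattern, pool size, and doHeavyComputation() are placeholders, and note that a proxy in front (such as Apache) can still time out the connection.

import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/compute", asyncSupported = true)
public class ComputeServlet extends HttpServlet {
    private final ExecutorService executor = Executors.newFixedThreadPool(4);

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) {
        final AsyncContext ctx = req.startAsync();  // frees the container thread
        ctx.setTimeout(0);                          // no container-side timeout
        executor.submit(() -> {
            try {
                String result = doHeavyComputation();              // placeholder for the real work
                ctx.getResponse().setContentType("application/json");
                ctx.getResponse().getWriter().write(result);
            } catch (IOException e) {
                // log the failure; still complete the async context below
            } finally {
                ctx.complete();
            }
        });
    }

    private String doHeavyComputation() {
        return "{\"status\": \"done\"}";            // stand-in for the actual computation
    }
}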
Talking from a design perspective, you would need to return a 202 Accepted with an ID and a URL to the job. OldApp then needs to check for the result of the operation using that URL.
The task that you fork on the server should implement the Callable interface, and I would recommend using a thread pool for this. The GET URL for the job that was forked can check the status of the Future object and return the result to the user.
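A rough sketch of that job pattern, assuming an in-memory job store (the class name and the String result type are made up for illustration): the POST handler submits a Callable to a thread pool and returns the job ID for the 202 response, and the GET handler checks the Future.

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class JobService {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final Map<String, Future<String>> jobs = new ConcurrentHashMap<>();

    // Called from the POST handler: submit the work and return a job ID that
    // goes back to OldApp in the 202 response (for example in the Location header).
    public String submit(Callable<String> work) {
        String jobId = UUID.randomUUID().toString();
        jobs.put(jobId, pool.submit(work));
        return jobId;
    }

    // Called from the GET handler: null means "still running", so the caller
    // answers with another "try again later" response; otherwise return the payload.
    public String resultOrNull(String jobId) throws ExecutionException, InterruptedException {
        Future<String> future = jobs.get(jobId);
        if (future == null || !future.isDone()) {
            return null;
        }
        return future.get();
    }
}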
