So, for instance, I am executing multiple requests like this:
for (final ParentReference previousParent : previousParents) {
driveService.parents().delete(fileResourceId, previousParent.getId()).execute();
}
Is there a way to execute them in a batch (I mean not like adding to a Collection and then calling execute on each element)?
If there is, will the batch request form some kind of single request to reduce the number of API calls and server requests?
I've seen there is a BatchRequest class, but I can't figure out whether that's what I need and how to use it, assuming I have access to a com.google.api.services.drive.Drive service.
The BatchRequest takes two arguments and I can't seem to find how to even build it properly, if that's what I need in the first place.
Yes, using a BatchRequest will allow you to perform multiple API requests within a single HTTP request. More information on how to use batching in the Google APIs Java client library is available here: https://code.google.com/p/google-api-java-client/wiki/Batch
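For reference, here is a rough sketch of what that could look like with the Drive v2 Java client, reusing the driveService, fileResourceId and previousParents from the snippet above:

import com.google.api.client.googleapis.batch.BatchRequest;
import com.google.api.client.googleapis.batch.json.JsonBatchCallback;
import com.google.api.client.googleapis.json.GoogleJsonError;
import com.google.api.client.http.HttpHeaders;
import com.google.api.services.drive.model.ParentReference;

// Build one BatchRequest from the existing Drive service, queue every delete
// into it, then send the whole batch as a single HTTP request.
BatchRequest batch = driveService.batch();

JsonBatchCallback<Void> callback = new JsonBatchCallback<Void>() {
    @Override
    public void onSuccess(Void content, HttpHeaders responseHeaders) {
        // Parent reference removed.
    }

    @Override
    public void onFailure(GoogleJsonError e, HttpHeaders responseHeaders) {
        System.err.println("Delete failed: " + e.getMessage());
    }
};

for (final ParentReference previousParent : previousParents) {
    driveService.parents().delete(fileResourceId, previousParent.getId())
            .queue(batch, callback);
}

batch.execute(); // one round trip for all queued deletes

Each queued call still gets its own callback, so you can see which individual deletes succeeded or failed even though they travel in a single request.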
I have a frontend web tool which interacts with a REST API written in Django. Each API call takes a long time to process and is also CPU/GPU intensive, so I am planning to run only one call at a time and put the rest of the calls in a queue. Now I am not sure if Celery with Redis can be helpful here, or whether I should stick with a job-queue approach on the Java side.
The tool would be used by multiple users, and each user would have their own jobs, so I need to put the jobs in a queue so that they can be processed one by one, asynchronously. Would Celery be helpful here?
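If you do stick with the job-queue approach on the Java side, a minimal sketch (class and job names below are made up for illustration) is a single-threaded executor: jobs from all users queue up internally and run one at a time, asynchronously from the requests that submitted them.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class JobQueue {
    // A single worker thread: submitted jobs queue up and run one at a time.
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    public Future<?> submit(Runnable job) {
        return worker.submit(job);
    }

    public static void main(String[] args) {
        JobQueue queue = new JobQueue();
        // Hypothetical long-running, CPU/GPU-intensive jobs from two users.
        queue.submit(() -> System.out.println("processing job for user A"));
        queue.submit(() -> System.out.println("processing job for user B"));
        queue.worker.shutdown(); // lets the queued jobs finish, then stops the worker
    }
}

Celery with Redis does essentially the same thing on the Python/Django side (with persistence and retries on top), so the choice is mostly about which side of the stack you want to own the queue.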
I am trying to access all the jobs of a project from Rundeck in order to find the failed jobs and restart them. I also want to check whether all the nodes are up, using Java.
When I try to create an instance of RundeckClient using org.rundeck.api.RundeckClient, I get the following error:
'RundeckClient(java.lang.String)' is not public in 'org.rundeck.api.RundeckClient'. Cannot be accessed from outside package
Rundeck has a REST API; you can read about it at https://docs.rundeck.com/docs/api/#index-links
Create a RESTful web-service client and invoke the web-service endpoints.
Refer to the Jobs section of the API documentation and use the appropriate API for your needs. You will need to generate a token in Rundeck and use it in your API calls.
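As a rough sketch (the host, project name, API version and token below are placeholders for your own instance), listing a project's jobs over the REST API with plain HttpURLConnection could look like this:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class RundeckJobsClient {
    public static void main(String[] args) throws Exception {
        // Placeholder host, project name and API version -- adjust to your Rundeck instance.
        String endpoint = "https://rundeck.example.com/api/40/project/my-project/jobs";
        String apiToken = "YOUR_API_TOKEN"; // generated from your Rundeck user profile

        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("X-Rundeck-Auth-Token", apiToken);
        conn.setRequestProperty("Accept", "application/json");

        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line);
            }
            // The JSON body lists the project's jobs; parse it to find the ones you care about.
            System.out.println(body);
        }
    }
}

From the returned job list you can then use the execution-related endpoints described in the same documentation to find failed runs and trigger them again.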
I have a JRuby/Rails app that needs to get data from a system written in Java. I have already worked out how to call Java code from Ruby.
However, let's say that the Client object I need to create starts threads, communicates with the internal system, etc., and that the data is delivered asynchronously in callbacks.
Do I have to write a separate web service (that creates this Client) with a permanent connection to the Java system, so that my Ruby/Rails code can call it synchronously? Or is it possible to write this async handler directly in Rails?
When multiple HTTP clients issue their GETs, I would of course have to connect to the Java system when the first client arrives. For the following clients, the data would already be there.
I realize what the proper solution would be, but I'm curious whether I can do it all in Rails.
I can (for now) live without real-time updating of the web page, as long as the data from the Java callbacks is stored "somewhere" so that the next HTTP refresh/GET can return it. The next step would be SSE, JavaScript, etc.
I think I know how to create the Java web service, however I would rather keep the solution a bit simpler with fewer services.
Thanks
Since you also have access to the Java code, here are two approaches for extending the Java backend so it provides the data you want to use in your Ruby frontend application.
Use REST or an HTTP service
Your Ruby web service could interact with the backend using REST (or any other HTTP approach). This results in cleaner and more reusable code, since you are able to access the backend data with any client capable of speaking HTTP.
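As a rough sketch of this first approach, the JDK's built-in com.sun.net.httpserver could expose whatever the callbacks have collected over plain HTTP; the endpoint path and JSON payload below are made up for illustration, and a real setup might use Jersey or another REST framework instead.

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class BackendHttpEndpoint {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // Hypothetical endpoint exposing the data collected from the asynchronous callbacks.
        server.createContext("/client-data", exchange -> {
            byte[] body = "{\"status\":\"ok\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });

        server.start(); // the Rails app can now poll http://host:8080/client-data
    }
}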
Use TCP together with a little protocol
In this approach the clients have to connect to a TCP socket on the backend to send data back and forth. You have to design a little byte- or string-based protocol, which you also have to parse. It's more complex than the first approach, but also more performant, and you don't have to rely on external libraries (e.g. Jersey for REST). It also has all the advantages of the former approach: you can serve any client capable of network communication and socket handling.
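A minimal sketch of that second approach, with a made-up one-line text protocol, might look like this:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class BackendTcpEndpoint {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(9090)) {
            while (true) {
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    // Hypothetical line-based protocol: the client sends one command,
                    // the server answers with one line of payload.
                    String command = in.readLine();
                    if ("GET_DATA".equals(command)) {
                        out.println("{\"status\":\"ok\"}");
                    } else {
                        out.println("ERROR unknown command");
                    }
                }
            }
        }
    }
}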
I apologize in advance if this is a bad question.
I'm new to backend development and I'm trying to build an instant messaging service on GAE using Java servlets.
And I assume the process for sending a message will be like this:
1. Client send JSON file to servlet.
2. Servlet parses the JSON file and archives the message to the database.
So my question is:
what's going to happen if the next user attempts to send another message while the servlet is in the middle of the process of saving the previous message to the database?
Because the arrival of user requests is not synchronized with the servlet lifecycle, will the new request just get lost?
Is there some mechanism that queues the requests, or is that something I'll have to implement myself?
I think I'm really confused about how asynchronous requests between different functions in a distributed system work.
Also, are there any readings you would recommend on backend design patterns, or just a general introduction?
Thanks a lot!
Please read the official tutorial on the subject, which talks in depth about Java web technologies, web containers and servlets:
http://docs.oracle.com/javaee/6/tutorial/doc/bnafd.html
But to answer your questions:
When another HTTP request comes in, a new thread will be created by the web container and will run your servlet concurrently.
The new request will be processed concurrently.
The answer depends on your specific problem, performance and SLA requirements. The simplest solution would be to parse and write each request to the database. If you are dealing with a very large number of simultaneous requests coming in, I'd suggest starting a whole new discussion on the subject.
You need to understand exactly what a 'Thread' is. When another request is sent to the servlet, a container like Tomcat will assign another thread to that request. Every thread is independent of the others.
Server requests will run in parallel and your code might access/edit the same data concurrently. You should use Datastore transactions to prevent data corruption.
No, requests are independent and they run in parallel.
You could use Task Queues in your code to make updates run sequentially, but I'd advise highly against it: first, a Task Queue will double your requests; second, it will force a distributed parallel system to run sequentially, basically negating the whole purpose of App Engine.
Parallel processing is essential in server programming; it enables servers to handle a high volume of requests. You should write code that takes this into account and use Datastore transactions to prevent possible data corruption in those cases.
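To make the transactional part concrete, a minimal sketch using the low-level App Engine Datastore API could look like the following; the "Message" kind and its properties are hypothetical names used only for this example.

import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.Transaction;
import java.util.Date;

public class MessageStore {
    // Saves one chat message inside a transaction; "Message" and its
    // properties are hypothetical names for this sketch.
    public static void saveMessage(String senderId, String body) {
        DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
        Transaction txn = datastore.beginTransaction();
        try {
            Entity message = new Entity("Message");
            message.setProperty("sender", senderId);
            message.setProperty("body", body);
            message.setProperty("sentAt", new Date());

            datastore.put(txn, message);
            txn.commit(); // concurrent requests either commit or fail cleanly
        } finally {
            if (txn.isActive()) {
                txn.rollback(); // undo the write if something went wrong
            }
        }
    }
}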
In the servlet lifecycle, the init() and destroy() methods are called only once, but service() is called every time a new request hits the application, and the single servlet instance is shared across those requests through different threads. Therefore one of the most basic rules for writing a servlet is not to keep mutable global variables in the servlet class.
Your variable is readable/writable by any other class. You have no control to ensure that they all do sensible things with it; one of them could overwrite it, incorrectly increment it, etc.
There is one instance of a servlet per JVM, so many threads may try to access it concurrently. Because it is global and you are not providing any synchronization/access control, it will not be thread-safe. Also, if you ever run the servlet in some kind of cluster with different JVMs, the variable will not be shared between them and you will have multiple loginAttempt variables.
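If you really do need a shared per-JVM counter, a minimal sketch (assuming the loginAttempt counter discussed above) would at least make the update atomic, keeping in mind it still won't be shared across JVMs in a cluster:

import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LoginServlet extends HttpServlet {
    // One servlet instance per JVM, so this field is shared by all request threads.
    // AtomicInteger makes the increment thread-safe within that JVM.
    private final AtomicInteger loginAttempts = new AtomicInteger();

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        int attempts = loginAttempts.incrementAndGet();
        resp.getWriter().println("Login attempts so far: " + attempts);
    }
}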
Can I use both in the same project? Or do they cause "network interference" with one another?
You can use any Ajax mechanism in GWT simultaneously; they are not exclusive, so you can select the most suitable one for each need in your product.
For instance, I have an application which uses RF for entities and business requests, gwt-atmosphere (which uses RPC) for comet communications, and gwtquery-Ajax (which uses RequestBuilder) for consuming third-party JSON services.
It does not cause any kind of interference. In the end you are just sending HTTP requests to a servlet. Your client can handle (pseudo-)simultaneous queries, as can your server.
So you can use different methods to query your server; the point is more about the maintainability/reusability/readability of your code.
I would advise you to use only one way for querying your server.
IMO you should avoid using RPC as much as you can and look into more standard ways of communicating with your server (RequestFactory, plain RequestBuilder, or even REST libraries like RestyGWT) so that your server is not tied to your GWT client.
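To illustrate the RequestBuilder route, here is a minimal sketch; the /api/messages endpoint is hypothetical and the response handling is deliberately bare:

import com.google.gwt.core.client.GWT;
import com.google.gwt.http.client.Request;
import com.google.gwt.http.client.RequestBuilder;
import com.google.gwt.http.client.RequestCallback;
import com.google.gwt.http.client.RequestException;
import com.google.gwt.http.client.Response;
import com.google.gwt.http.client.URL;

public class MessagesClient {
    // Plain HTTP GET from the GWT client; no GWT-RPC coupling to the server.
    public void fetchMessages() {
        RequestBuilder builder = new RequestBuilder(
                RequestBuilder.GET, URL.encode("/api/messages")); // hypothetical REST endpoint
        builder.setHeader("Accept", "application/json");
        try {
            builder.sendRequest(null, new RequestCallback() {
                @Override
                public void onResponseReceived(Request request, Response response) {
                    if (response.getStatusCode() == Response.SC_OK) {
                        GWT.log("Got JSON payload: " + response.getText());
                    } else {
                        GWT.log("Server returned status " + response.getStatusCode());
                    }
                }

                @Override
                public void onError(Request request, Throwable exception) {
                    GWT.log("Request failed", exception);
                }
            });
        } catch (RequestException e) {
            GWT.log("Could not send request", e);
        }
    }
}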