We have to implement batch requests for OData in Java. I'm new to OData. Of the two references below, which one should be followed? Do we have to construct the batch request ourselves, or will it be done using the OData batch APIs? Can anyone please help on how to proceed with the implementation?
https://olingo.apache.org/doc/odata4/tutorials/batch/tutorial_batch.html
https://olingo.apache.org/doc/odata4/tutorials/od4_basic_batch_client.html
The batch request will be created automatically by the OData Client.
TL;DR:
A batch request is a REST call to a special endpoint $batch, with a well-defined payload type.
The payload consists of batch requests and, nested within them, changesets. Both are used to club multiple requests into one, except that the requests in one changeset are expected to be atomic: either all of them execute, or, if one or more fails, there is a rollback (or similar) to prevent the others from persisting.
https://olingo.apache.org/doc/odata4/tutorials/od4_basic_batch_client.html
This link has an example of creating the client, then creating an entity, setting some properties, putting it in a changeset, and executing it. In the background the client will send a batch request in the OData $batch format as documented in
https://olingo.apache.org/doc/odata4/tutorials/batch/tutorial_batch.html
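To make that concrete, here is a rough sketch along the lines of the client tutorial; the service root, namespace, entity set, and property names are placeholder assumptions, so check them against your service's metadata:

import java.net.URI;
import org.apache.olingo.client.api.ODataClient;
import org.apache.olingo.client.api.communication.request.batch.BatchManager;
import org.apache.olingo.client.api.communication.request.batch.ODataBatchRequest;
import org.apache.olingo.client.api.communication.request.batch.ODataChangeset;
import org.apache.olingo.client.api.communication.response.ODataBatchResponse;
import org.apache.olingo.client.api.domain.ClientEntity;
import org.apache.olingo.client.core.ODataClientFactory;
import org.apache.olingo.commons.api.edm.FullQualifiedName;

public class BatchClientSketch {
    public static void main(String[] args) {
        // placeholder service root -- adapt to your service
        String serviceRoot = "http://localhost:8080/odata/DemoService.svc";
        ODataClient client = ODataClientFactory.getClient();

        // one batch request; the client targets <serviceRoot>/$batch for you
        ODataBatchRequest batchRequest =
                client.getBatchRequestFactory().getBatchRequest(serviceRoot);
        BatchManager payloadManager = batchRequest.payloadManager();

        // a changeset groups writes that must succeed or fail together
        ODataChangeset changeset = payloadManager.addChangeset();

        ClientEntity product = client.getObjectFactory()
                .newEntity(new FullQualifiedName("OData.Demo", "Product"));
        product.getProperties().add(client.getObjectFactory().newPrimitiveProperty(
                "Name",
                client.getObjectFactory().newPrimitiveValueBuilder().buildString("Pen")));

        URI targetUri = client.newURIBuilder(serviceRoot)
                .appendEntitySetSegment("Products").build();
        changeset.addRequest(
                client.getCUDRequestFactory().getEntityCreateRequest(targetUri, product));

        // triggers one multipart/mixed POST containing all grouped requests
        ODataBatchResponse response = payloadManager.getResponse();
        System.out.println(response.getStatusCode() + " " + response.getStatusMessage());
    }
}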
GET http://localhost/foo/api/v1/bars/:id
How can different JSON responses be registered for a GET call? We would like the GET call to return a separate response based on whether a CLI or the user interface is invoking the API, by passing a query parameter. But how do we register different serializers dynamically on the response?
You can use a User-Agent request header to identify the application doing the request. There are good tutorials to check how to access the headers in Spring, like this Baeldung one.
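For illustration, a minimal Spring sketch that branches on that header (the CLI's User-Agent value and the two response shapes are assumptions of this example):

import java.util.Map;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestHeader;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class BarController {

    @GetMapping("/foo/api/v1/bars/{id}")
    public ResponseEntity<?> getBar(@PathVariable String id,
            @RequestHeader(value = "User-Agent", required = false) String userAgent) {
        // each client identifies itself via its own User-Agent value
        if (userAgent != null && userAgent.startsWith("my-cli")) {
            // hypothetical compact shape for the CLI
            return ResponseEntity.ok(Map.of("id", id));
        }
        // hypothetical richer shape for the user interface
        return ResponseEntity.ok(Map.of("id", id, "displayName", "Bar " + id));
    }
}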
I'm implementing a GET method in Quarkus that should send large amounts of data to the client. The data is read from the database using JPA/Hibernate, serialized to JSON, and then sent to the client. How can this be done efficiently without having the whole data set in memory? I tried the following three possibilities, all without success:
Use getResultList from JPA and return a Response with the list as the body. A MessageBodyWriter will take care of serializing the list to JSON. However, this will pull all data into memory which is not feasible for a larger number of records.
Use getResultStream from JPA and return a Response with the stream as the body. A MessageBodyWriter will take care of serializing the stream to JSON. Unfortunately this doesn't work because it seems the EntityManager is closed after the JAX-RS method has been executed and before the MessageBodyWriter is invoked. This means that the underlying ResultSet is also closed and the writer cannot read from the stream any more.
Use a StreamingOutput as Response body. The same problem as in 2. occurs.
So my question is: what's the trick for sending large data read via JPA with Quarkus?
Do your results have to be all in one response? How about making the client request the next results page until there's no next page - a typical REST API pagination exercise? Also, the JPA backend will only fetch that page from the database, so there's no moment when everything would sit in memory.
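A minimal JAX-RS sketch of that idea (the Fruit entity is carried over from the rest of this thread; the paths and page-size default are assumptions):

import java.util.List;
import javax.inject.Inject;
import javax.persistence.EntityManager;
import javax.ws.rs.DefaultValue;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;

@Path("/fruits")
public class PagedFruitResource {

    @Inject
    EntityManager em;

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public List<Fruit> page(@QueryParam("page") @DefaultValue("0") int page,
                            @QueryParam("size") @DefaultValue("100") int size) {
        // only one page is materialized in memory per request
        return em.createQuery("select f from Fruit f order by f.id", Fruit.class)
                .setFirstResult(page * size)
                .setMaxResults(size)
                .getResultList();
    }
}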
Based on your requirement you have two options:
Option 1:
Take the HATEOAS approach (https://restfulapi.net/hateoas/), one of the standard patterns for exchanging large data sets over REST. In this approach the server quickly responds to the first request with a set of HATEOAS URIs, where each URI represents one group of elements. So you need to generate these URIs based on the data size and let the client code take responsibility for calling these URIs individually, as regular REST APIs, to get the actual data (sketched below). In this option too you can consider a reactive style to get more advantage of streaming processing with a small memory footprint.
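A rough sketch of that first, cheap response, which only returns the chunk URIs (the paths, chunk size, and total count are placeholder assumptions); the client then fetches each URI individually:

import java.util.ArrayList;
import java.util.List;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.UriInfo;

@Path("/fruits")
public class FruitLinksResource {

    @GET
    @Path("/links")
    public List<String> links(@Context UriInfo uriInfo) {
        long total = 100_000; // in reality: the result of a count query
        int chunkSize = 1_000; // tune to your payload size

        // the first response is cheap: one URI per group of elements
        List<String> uris = new ArrayList<>();
        for (long offset = 0; offset < total; offset += chunkSize) {
            uris.add(uriInfo.getBaseUriBuilder()
                    .path("fruits")
                    .queryParam("page", offset / chunkSize)
                    .queryParam("size", chunkSize)
                    .build().toString());
        }
        return uris;
    }
}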
Option 2:
As suggested by @Serkan above, continuously stream the result set from the database to the client as the REST response. Here you need to check the timeout settings of any gateway between the client and the service; if there is no gateway, you are good. You can take advantage of reactive programming at all layers to achieve continuous streaming: "DAO/data access layer" --> "service layer" --> "REST controller" --> "client". Spring Reactor is compliant with JAX-RS as well. https://quarkus.io/guides/getting-started-reactive. This is the best architecture style when dealing with large data processing.
Here you have some resources that can help you with this:
Using reactive Hibernate: https://quarkusio.zulipchat.com/#narrow/stream/187030-users/topic/Large.20datasets.20using.20reactive.20SQL.20clients
Paging vs Forward only ResultSets: https://knes1.github.io/blog/2015/2015-10-19-streaming-mysql-results-using-java8-streams-and-spring-data.html
The last article is for SpringBoot, but the idea can also be implemented with Quarkus.
------------Edit:
OK, I've worked out an example where I do a batch select. I did it with Panache, but you can easily do it without it as well.
I'm returning ScrollableResults, then using it in the REST resource to stream the rows via SSE (server-sent events) to the client.
------------Edit 2:
I've added setFetchSize to the query. You should play with this number and set it between 1 and 50. If the value is 1, the db rows will be fetched one by one; this mimics streaming the most and will use the least amount of memory, but the I/O between the db & app will be more frequent.
And the usage of a StatelessSession is highly recommended when doing bulk operations like this.
import static javax.ws.rs.core.MediaType.APPLICATION_JSON_TYPE;
import static javax.ws.rs.core.MediaType.SERVER_SENT_EVENTS;
import javax.persistence.Entity;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Context;
import javax.ws.rs.sse.Sse;
import javax.ws.rs.sse.SseEventSink;
import org.hibernate.ScrollMode;
import org.hibernate.SessionFactory;
import io.quarkus.hibernate.orm.panache.PanacheEntity;

@Entity
public class Fruit extends PanacheEntity {
    public String name;
    // I've moved the logic from here to the REST resource,
    // otherwise you cannot close the session
}

@Path("/fruits")
public class FruitResource {

    @GET
    @Produces(SERVER_SENT_EVENTS)
    public void fruitsStream(@Context Sse sse, @Context SseEventSink sink) {
        var sf = Fruit.getEntityManager().getEntityManagerFactory().unwrap(SessionFactory.class);
        // StatelessSession has no first-level cache, so entities don't pile up in memory
        try (var session = sf.openStatelessSession();
             var scrollableResults = session.createQuery("select f from Fruit f")
                     .setFetchSize(1) // fetch rows from the db one by one
                     .scroll(ScrollMode.FORWARD_ONLY)) {
            while (scrollableResults.next()) {
                // each row becomes one SSE event, serialized as JSON
                sink.send(sse.newEventBuilder()
                        .data(scrollableResults.get(0))
                        .mediaType(APPLICATION_JSON_TYPE)
                        .build());
            }
            sink.close();
        }
    }
}
Then I call this REST endpoint like this (via httpie):
> http :8080/fruits --stream
data: {"id":9996,"name":"applecfcdd592-1934-4f0e-a6a8-2f88fae5d14c"}
data: {"id":9997,"name":"apple7f5045a8-03bd-4bf5-9809-03b22069d9f3"}
data: {"id":9998,"name":"apple0982b65a-bc74-408f-a6e7-a165ec3250a1"}
data: {"id":9999,"name":"apple2f347c25-d0a1-46b7-bcb6-1f1fd5098402"}
data: {"id":10000,"name":"apple65d456b8-fb04-41da-bf07-73c962930629"}
Hope this helps you.
I'm trying to invoke an AWS Lambda function asynchronously from AWS API Gateway.
I have a long-running (2-3 min) Lambda function and I want to invoke it asynchronously from an HTTP POST request. I configured the API Gateway as a Lambda proxy integration (because I want to pass the body unmodified to the function). This is working fine, but after 30s I get a 504 due to the API Gateway execution time restriction.
But I can't manage to call the function asynchronously. According to the AWS docs it should be possible if I set the header "X-Amz-Invocation-Type", but it doesn't make any difference.
Does anybody know if it is possible to invoke a function async and using the proxy integration?
AWS says it's possible if you set the X-Amz-Invocation-Type header to Event, but I ran into the same necessity a few months ago and this did not work for me, so I am not sure whether that is still the case or whether it was just me who misconfigured it. Maybe you are missing the same thing as me back then: I did not add an InvocationType header on the Integration Request as the docs suggest. This very likely is the case for you too, but still, I can't guarantee it works.
The documentation says:
Configure Lambda asynchronous invocation in the API Gateway console
In Integration Request, add an X-Amz-Invocation-Type header.
In Method Request, add an InvocationType header and map it to the X-Amz-Invocation-Type header in the Integration Request with either a static value of 'Event' or the header mapping expression of method.request.header.InvocationType. For the latter, the client must include the InvocationType:Event header when making a request to the API method.
If this works, then you are good to go.
What I did back then, however, was to create an intermediate Lambda which literally acted as a proxy to the actual Lambda.
There is a wide range of options for executing your function asynchronously, but you will need two Lambda functions regardless.
One option is to invoke another function (which will actually execute the task you want) asynchronously from the function invoked by API Gateway:
// setup for the proxy Lambda's handler (AWS SDK for JavaScript v2)
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

const params = {
    FunctionName: 'YOUR_FUNCTIONS_NAME',
    InvocationType: 'Event', // 'Event' means asynchronous invocation
    Payload: event.body // the raw JSON body coming from API Gateway; Payload must be a string or Buffer
};
await lambda.invoke(params).promise(); // await here only waits for the invoke call to be accepted. Once the 2nd Lambda is invoked, this returns immediately
Another option is to put a message in SQS and configure a trigger for your Lambda to be invoked when there's a new message in the SQS queue. The same applies to an SNS notification.
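For the SQS variant, the proxy function only needs to enqueue the request body; a hedged sketch in Java with the AWS SDK follows (the queue URL is a placeholder, and the worker Lambda is assumed to have an SQS trigger configured):

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

public class ProxyToQueue {
    private static final AmazonSQS SQS = AmazonSQSClientBuilder.defaultClient();

    // called from the proxy Lambda's handler with the raw API Gateway body
    public static void enqueue(String requestBody) {
        // placeholder queue URL -- use your own
        String queueUrl = "https://sqs.eu-west-1.amazonaws.com/123456789012/long-task-queue";
        SQS.sendMessage(queueUrl, requestBody);
        // the worker Lambda is triggered by the queue and may run up to 15 minutes
    }
}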
Other options include Kinesis, DynamoDB Streams, etc., but the idea is the same: the function invoked via API Gateway must be nothing but a proxy to the other Lambda. How this proxy works (be it sending a message to SQS or SNS, invoking the other function asynchronously directly, etc.) does not matter; what matters is the concept of getting around API Gateway's 30-second request limit.
I'm facing a particular use case while using Wiremock standalone API.
I would like to be able to reuse a response body generated by stubbing for another request (stubbed as well) as a context model. The purpose is to store the entire response data for a generated id, which would allow me to serve it again simply by knowing the id, particularly in a GET method (where there is no request body).
Is there a way, while defining a stub, to capture the generated response in order to store it?
Or do you have another, better idea?
Finally I solved the problem by using an OkHttp interceptor (this part depends on your client solution).
In the interceptor, I store every piece of response data (e.g. a generated id) and set it in the headers of each subsequent request when the request matches part of the stored response.
Adding the data to the request headers allows me to access it in a JSON template file, for instance.
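A minimal sketch of such an interceptor, assuming the stubbed responses carry the generated id in an "id" JSON field and that replaying it in an X-Context-Id request header is acceptable (both names are assumptions of this example):

import java.io.IOException;
import okhttp3.Interceptor;
import okhttp3.Request;
import okhttp3.Response;

public class ContextCaptureInterceptor implements Interceptor {

    // the most recently captured id; a map would allow several parallel contexts
    private volatile String lastId;

    @Override
    public Response intercept(Chain chain) throws IOException {
        Request request = chain.request();
        if (lastId != null) {
            // replay the captured context on the next request
            request = request.newBuilder().header("X-Context-Id", lastId).build();
        }
        Response response = chain.proceed(request);

        // peekBody copies the body, so the caller can still consume the response
        String body = response.peekBody(Long.MAX_VALUE).string();
        String id = extractId(body);
        if (id != null) {
            lastId = id;
        }
        return response;
    }

    private String extractId(String json) {
        // naive placeholder; use a real JSON parser in practice
        int start = json.indexOf("\"id\":\"");
        if (start < 0) return null;
        int end = json.indexOf('"', start + 6);
        return end < 0 ? null : json.substring(start + 6, end);
    }
}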
I'm fairly new to Jersey JAX-RS, so please bear with me. We're trying to add batch processing capabilities to our REST API by having the client submit a JSON list of uri paths that it would typically have made individually. For example:
[{"/rest/shoes/1"} , {"/rest/shirts/24"} , {"/rest/costume?color=green"}]
Again, each string in this list would be paths (or subpaths) in the REST api.
This list would be submitted to a single path, say "/rest/queries", which would correspond to a method public List<Response> queries(List<String>). The idea is to execute the corresponding method for each path in the list. Is there a way to do that in Jersey 1.0 JAX-RS? Or, alternatively, is there a way to configure JAX-RS so that it automatically handles batch GET requests?
Our goal is to have the batch request that contain several GET requests that can be entirely different from each other, similar to my example above.
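For reference, a rough sketch of the endpoint shape being described; it simply re-issues each GET through the Jersey 1.x client and collects the bodies (the base URL is a placeholder, and binding the JSON array to List<String> assumes a JSON provider is configured; a smarter variant could dispatch to the resource methods internally instead of going over HTTP):

import java.util.ArrayList;
import java.util.List;
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import com.sun.jersey.api.client.Client;
import com.sun.jersey.api.client.ClientResponse;

@Path("/rest/queries")
public class BatchQueryResource {

    private static final String BASE = "http://localhost:8080"; // placeholder host

    @POST
    @Consumes("application/json")
    @Produces("application/json")
    public List<String> queries(List<String> paths) {
        // each submitted path is executed as its own GET; bodies are returned in order
        Client client = Client.create();
        List<String> results = new ArrayList<>();
        for (String path : paths) {
            ClientResponse response = client.resource(BASE + path).get(ClientResponse.class);
            results.add(response.getEntity(String.class));
        }
        return results;
    }
}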