Obtaining the request header in a gRPC server - java

I built a Java gRPC server and want to obtain data that the client transmits in the request headers. At present I can only intercept and parse the request-header Metadata with a ServerInterceptor, but I want to access it while the service method is running. What is the solution?
I tried to hand the data off through Redis, but my gRPC server handles multiple data sources and the same request can arrive many times. If different clients send me the same request (same method name and parameters) but with different request headers, the header of a later request can overwrite the Redis entry written for an earlier one, so I cannot guarantee consistency between a request and its request header!

Use io.grpc.Context to propagate the value to the service implementation, via Contexts.interceptCall(). The jwt-auth example does this, as do some other Stack Overflow questions and answers.
Essentially, you just create a new Context with the information you want to communicate, and use Contexts.interceptCall() to make it available to the service as a ThreadLocal. If your service does processing on another thread, you need to either propagate the Context to that other thread or save the value ahead of time.
public class AddToContextInterceptor implements ServerInterceptor {
    // Context keys use reference equality, so the consumer of the value
    // must use this specific object.
    public static final Context.Key<MyObject> MY_KEY = Context.key("MY_KEY");

    // SOME_KEY is the Metadata.Key<MyObject> naming the header to read,
    // defined elsewhere.
    @Override
    public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(
            ServerCall<ReqT, RespT> call, Metadata headers,
            ServerCallHandler<ReqT, RespT> next) {
        Context newContext =
            Context.current().withValue(MY_KEY, headers.get(SOME_KEY));
        return Contexts.interceptCall(newContext, call, headers, next);
    }
}
// In the service:
MyObject o = AddToContextInterceptor.MY_KEY.get();
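For completeness, a hedged sketch of how the interceptor might be wired up and the value read inside a service method; the Greeter names follow the gRPC hello-world example and are placeholders, not part of the original answer:

// Register the interceptor when building the server.
Server server = ServerBuilder.forPort(8080)
    .addService(ServerInterceptors.intercept(
        new GreeterImpl(), new AddToContextInterceptor()))
    .build()
    .start();

// Read the value on the gRPC request thread. To hand work to another
// thread, wrap the task first: executor.execute(Context.current().wrap(task)).
class GreeterImpl extends GreeterGrpc.GreeterImplBase {
    @Override
    public void sayHello(HelloRequest req, StreamObserver<HelloReply> obs) {
        MyObject fromHeader = AddToContextInterceptor.MY_KEY.get();
        // ... use fromHeader while building the reply ...
        obs.onCompleted();
    }
}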

Related

Accessing StreamListener headers from RequestContext or similar

I have a service which calls a dozen other services. It reads from a Kafka topic using a @StreamListener in a controller class. For traceability purposes, the same headers (an original request ID) from the Kafka message need to be forwarded to all the other services as well.
Traditionally, with a @PostMapping("/path") or @GetMapping, a request context is generated, and one can access the headers from anywhere using RequestContextHolder.currentRequestAttributes(); I would then just pass the HttpHeaders object into a RequestEntity whenever I need to make an external call.
However, in a @StreamListener no request context is generated, and trying to access the RequestContextHolder results in an exception.
Here's an example of what I tried to do, which resulted in an exception:
public class Controller {
    @Autowired Service1 service1;
    @Autowired Service2 service2;

    @StreamListener("stream")
    public void processMessage(Model model) {
        service1.execute(model);
        service2.execute(model);
    }
}

public class Service {
    RestTemplate restTemplate;

    public void execute(Model model) {
        // Do some stuff
        // Pseudocode below; this call throws outside a request context.
        HttpHeaders httpHeaders = RequestContextHolder.currentRequestAttributes().someCodeToGetHttpHeaders();
        HttpEntity<Model> request = new HttpEntity<>(model, httpHeaders);
        restTemplate.exchange(url, HttpMethod.POST, request, String.class);
    }
}
My current workaround is to change the @StreamListener to a @PostMapping and have another @PostMapping which calls it, so that a request context is generated. Another option was to use a ThreadLocal, but that seems just as janky.
I'm aware of the @Headers MessageHeaders annotation for accessing the stream headers; however, that isn't easily accessible without passing the headers down to each and every service, and it would affect many unit tests.
Ideally, I need a way to create my own request context (or whatever the proper terminology is) so I have a place to store request-scoped objects (the HttpHeaders), or another thread-safe way to pass the request headers down the stack without adding a request argument to service.execute.
I've found a solution and am leaving it here for anyone else trying to achieve something similar.
If your goal is to forward a bunch of headers end-to-end through REST controllers and stream listeners, you might want to consider using Spring Cloud Sleuth.
Add it to your project through your Maven or Gradle configuration:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
Specifically, Spring Cloud Sleuth has a feature to forward headers, or "baggage", by setting the property spring.sleuth.propagation-keys in your application.properties. These key-value pairs are persisted through the entire trace, including any downstream HTTP or stream calls that also implement the same propagation keys.
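As an illustration (the key names echo this question's use case but are otherwise placeholders), the property takes a comma-separated list in application.properties:

spring.sleuth.propagation-keys=request-id,country-code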
If these fields need to be accessed on a code level, you can get and set them using the ExtraFieldPropagation static functions:
ExtraFieldPropagation.set("country-code", "FO"); // Set
String countryCode = ExtraFieldPropagation.get("country-code"); // Get
Note that the ExtraFieldPropagation setter cannot set a property that is not present in the defined spring.sleuth.propagation-keys, so arbitrary keys won't be accepted.
You can read up on the documentation for more information.

Vertx: passing additional variable between verticles

I have a service which mostly proxies another server, adding some business logic to its responses. It just receives a request, sends a similar request to the other server (which must be sent with the specific user cookie that came with the original request), applies some changes to the response, and sends it back to the user.
I am using Vert.x with rx-java, and currently I have two verticles.
One of them accepts REST requests and, as part of processing them, calls a method on another verticle via the eventBus:
router.route().handler(routingContext -> {
    someService.handleRequest()
        .subscribe(res -> { /* ...send response... */ });
});
And somewhere in handleRequest there is a call to another verticle:
eventBus.rxSend(address, message, deliveryOptions);
Another verticle listens on this address and sends a request to the other server:
eventBus.consumer(address)
    .toObservable()
    .subscribe(message -> {
        response = anotherService.handleMessage(message);
        message.reply(response.body());
    });
And somewhere in handleMessage it calls the Vert.x HttpClient to make a request to the other server:
response = client.get(...);
This request should contain the cookie value from the initial request received in the first verticle with the routing context.
What is the right way to pass the cookie value from the routingContext to the Vert.x HttpClient in the other verticle? (I don't want to change interfaces by adding a new method parameter everywhere; it should be handled implicitly.)
As far as I understand, the best way to pass an additional value between verticles is to use headers in DeliveryOptions.
But what is the right and safe way to store it between service calls (e.g. to pass it through handleRequest/handleMessage without modifying the method signatures)? Possibly storing it in the verticle context?
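For what it's worth, a minimal sketch of the DeliveryOptions approach, assuming the cookie travels under an illustrative header name user-cookie:

// First verticle: copy the cookie from the routing context into the message headers.
String cookie = routingContext.request().getHeader("Cookie");
DeliveryOptions options = new DeliveryOptions().addHeader("user-cookie", cookie);
eventBus.rxSend(address, message, options)
    .subscribe(reply -> { /* ...send response... */ });

// Second verticle: read the header back before calling the HTTP client.
eventBus.consumer(address)
    .toObservable()
    .subscribe(msg -> {
        String userCookie = msg.headers().get("user-cookie");
        // set userCookie as the Cookie header on the outgoing client.get(...) request
    });

This keeps the value on the message itself rather than in shared state, so concurrent requests cannot overwrite each other.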

Jersey client with null put method

I am working on a Jersey client for one of my services and am having trouble determining the best way to pass a null entity through the client's PUT. On the service side of things, this is my endpoint:
@PUT
@Path("/rule/disable/key/{key}")
@Produces(MediaType.APPLICATION_JSON)
public Response disableRuleByKey(@PathParam("key") String key)
        throws Exception {
    try {
        DAL.getWriter().disableRuleByKey(key);
        return Response.ok().build();
    } catch (BlahException bla) {
        throw bla;
    }
}
Basically, all the method does in the backend is flip a toggle for other parts of the application to use. I'm not sure if PUT is the correct verb to use here (this was written by a teammate); I know the request doesn't even have a JSON payload.
Anyway, on the client side I have this generic putItem() code that all of my clients use via extends:
public static <T> boolean putItem(Client client, String uri, T item)
        throws InterruptedException, ExecutionException {
    Invocation putConfig = client.target(uri).request()
        .buildPut(Entity.entity(item, MediaType.APPLICATION_JSON));
    Future<Response> asyncResponse = putConfig.submit();
    Response response = asyncResponse.get();
    return response.getStatus() == Status.OK.getStatusCode();
}
This PUTs into the database fine when there is a JSON payload, but since the endpoint above doesn't take one, I was wondering what the best course of action would be. Would passing null to the Invocation's .buildPut() be okay, since I am not sending a payload?
I am open to modifying the endpoint too, but this is what I currently have, and I can't figure out the best way to send this value to the backend. Should I just modify the endpoint to consume a JSON object rather than passing the key as a @PathParam?
When replacing the state of a resource with a PUT request, you should send the new representation in the request payload.
Have a look at RFC 7231, the current reference for semantics and content in HTTP/1.1:
4.3.4. PUT
The PUT method requests that the state of the target resource be created or replaced with the state defined by the representation enclosed in the request message payload. [...]
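If the endpoint stays as it is, one common workaround (a sketch, not the only option) is to send an explicitly empty JSON entity instead of null, so the PUT still carries a well-formed, if trivial, representation:

// Send an empty JSON object as the payload; the server ignores the body anyway.
Invocation putConfig = client.target(uri).request()
    .buildPut(Entity.entity("{}", MediaType.APPLICATION_JSON));
Response response = putConfig.invoke();

Alternatively, redesigning the endpoint to accept the key in a JSON body, or switching to POST for this toggle-style action, would align the payload with the method semantics.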

How to support batch web api request processing using Spring/Servlets

We have our web API written using RESTEasy. We would like to support batch request processing the way Google's batch request processing works.
The following is the approach we are using currently:
We have a filter which accepts an incoming multipart request. The filter then creates multiple mock request and response objects and calls chain.doFilter with each mock pair.
public class BatchRequestProcessingFilter extends GenericFilterBean {
    @Override
    public void doFilter(ServletRequest req, ServletResponse res,
            FilterChain chain) throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        MockHttpServletRequest[] mockRequests =
            BatchRequestProcessorUtils.parseRequest(request);
        MockHttpServletResponse[] mockResponses =
            new MockHttpServletResponse[mockRequests.length];
        for (int i = 0; i < mockRequests.length; i++) {
            mockResponses[i] = new MockHttpServletResponse();
            chain.doFilter(mockRequests[i], mockResponses[i]);
        }
        BatchRequestProcessorUtils.populateResponseFromMockResponses(res, mockResponses);
    }
}
The MockHttpServletResponse class returns a dummy OutputStream which wraps a ByteArrayOutputStream.
BatchRequestProcessorUtils parses the multipart request and returns mock requests that wrap the actual request but return the headers specified in each part of the split request body.
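For illustration, that dummy OutputStream might be realized with a response wrapper along these lines (CapturingResponseWrapper is a hypothetical name; the question's actual class may differ):

// Captures everything the downstream servlet writes, in memory.
class CapturingResponseWrapper extends HttpServletResponseWrapper {
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    CapturingResponseWrapper(HttpServletResponse delegate) {
        super(delegate);
    }

    @Override
    public ServletOutputStream getOutputStream() {
        return new ServletOutputStream() {
            @Override public void write(int b) { buffer.write(b); }
            @Override public boolean isReady() { return true; }
            @Override public void setWriteListener(WriteListener listener) { /* no-op */ }
        };
    }

    byte[] getCapturedBody() {
        return buffer.toByteArray();
    }
}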
I could not find any existing library which supports batch request processing. So my question is: is this the correct approach to supporting batch requests, or is there a standard way this should be done?
Note that we are using Tomcat 8.
Sachin Gorade, I have not heard of such libraries, but I think your approach is reasonable. If I had to solve this problem, I would think of it like this:
In HTTP servlets we can only process requests separately, which is why we should wrap all the requests we want to send into a single request on the client side.
Since on the server side we receive only that one request, we should unwrap all the requests that were put into it. And because our batch mechanism does not know how to process each individual request, we should send each of them through all the filters/servlets; that is also the reason to put the batch filter first in the filter order.
Eventually, when all requests have been processed, we should send a response back to the client. Again, to do that we should wrap all the responses into a single one.
On the client side we should unwrap the responses and hand each of them to the object that can process it.
In my opinion there should be two mechanisms:
A batch sender for the client side, responsible for collecting and wrapping requests, unwrapping responses, and sending them to their processors (the methods that process regular responses).
A batch processor for the server side, responsible for unwrapping requests and for collecting and wrapping responses.
Of course, the two parts may be coupled (e.g. sharing a common "Wrapper" module), because the objects must be wrapped and unwrapped in the same way.
Also, if I worked on it, I would try to develop the client-side mechanism as a decorator over the class I use to send regular requests. That way I could switch between regular and batch mode whenever I needed to, as sketched below.
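As a rough illustration of that decorator idea (all names and types here are invented for the sketch, not taken from any library):

import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Decorates a regular single-request sender so callers can switch between
// immediate and batched sending without changing their own code.
class BatchingSender<Req, Resp> {
    private final Function<Req, Resp> sendOne;               // regular sender
    private final Function<List<Req>, List<Resp>> sendMany;  // one wrapped batch call
    private final List<Req> pending = new ArrayList<>();

    BatchingSender(Function<Req, Resp> sendOne,
                   Function<List<Req>, List<Resp>> sendMany) {
        this.sendOne = sendOne;
        this.sendMany = sendMany;
    }

    // Regular mode: pass the request straight through.
    Resp sendNow(Req request) {
        return sendOne.apply(request);
    }

    // Batch mode: collect requests, then wrap them into one call.
    void enqueue(Req request) {
        pending.add(request);
    }

    // Sends the wrapped batch and returns the unwrapped responses,
    // in the same order as the queued requests.
    List<Resp> flush() {
        List<Resp> responses = sendMany.apply(new ArrayList<>(pending));
        pending.clear();
        return responses;
    }
}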
Hope my opinion is helpful for you.

How can I preserve the request headers in an error response prepared through a JAX-RS (Apache CXF implementation) ExceptionMapper

I am implementing JAX-RS using Apache CXF. I have created an ExceptionMapper to handle bad requests like this:
public class ClientErrorExceptionMapper implements ExceptionMapper<ClientErrorException> {
    @Override
    public Response toResponse(final ClientErrorException exception) {
        return Response.status(Response.Status.BAD_REQUEST)
            .entity("Invalid request: Invalid URI.").build();
    }
}
I am not sure how this works internally, but I suppose the framework throws an exception when a user makes an invalid request, and this handler prepares an error message to be sent back. My problem is that I wish to preserve some custom headers the user sends in the request, so that I can send them back with the response. But in this exception mapper I can't see any way to get at the original request headers. I can set any new header on the response, but I wish to preserve the request headers, as I would in a normal request.
So is there any way in JAX-RS that I can preserve, or efficiently refer to, the custom headers of the current request?
What we have resorted to is using a thread-local variable to save the request context when the request arrives; then, in the ExceptionMapper, we can obtain request-specific information.
Ugly, but it works. I think we have a generic filter in the filter list that catches all requests before dispatch.
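A hedged sketch of that arrangement (class and header names are illustrative, not from the original answer):

// JAX-RS filter that stashes the request headers in a thread-local before dispatch.
@Provider
@PreMatching
public class HeaderCaptureFilter implements ContainerRequestFilter {
    private static final ThreadLocal<MultivaluedMap<String, String>> HEADERS =
        new ThreadLocal<>();

    @Override
    public void filter(ContainerRequestContext requestContext) {
        HEADERS.set(requestContext.getHeaders());
    }

    public static MultivaluedMap<String, String> currentHeaders() {
        return HEADERS.get();
    }
    // The thread-local should be cleared once the response is written
    // (e.g. in a ContainerResponseFilter) to avoid leaking state across
    // pooled threads.
}

// In the ExceptionMapper, copy a custom request header onto the error response:
MultivaluedMap<String, String> headers = HeaderCaptureFilter.currentHeaders();
Response.ResponseBuilder builder = Response.status(Response.Status.BAD_REQUEST)
    .entity("Invalid request: Invalid URI.");
if (headers != null && headers.containsKey("X-Custom-Header")) { // illustrative name
    builder.header("X-Custom-Header", headers.getFirst("X-Custom-Header"));
}
return builder.build();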
