I'm using Quarkus version 2.3.0.Final.
I have a rest endpoint in the Controller layer:
@POST
@Path("/upload")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.MULTIPART_FORM_DATA)
public Uni<Response> uploadFile(@MultipartForm FormData formData) {
    return documentService.uploadFile(formData).onItem()
            .transform(value -> Response.status(200)
                    .entity(value)
                    .build());
}
and in the service layer:
public Uni<?> uploadFile(@NonNull FormData formData) throws Exception {
    // Call to the GraphQL client using a blocking process - the problem occurs here
    RevisionResponse revisionResponse = entityRepository.createRevision(formData);
    // Upload to S3 using the async S3 client
    return Uni.createFrom()
            .future(storageProviderFactory.getDefaultStorageProvider().upload(formData))
            .map(storageUploadResponse -> DocumentResponse.builder()
                    .id(revisionResponse.getId())
                    .entityMasterId(revisionResponse.getEntityMasterId())
                    .type(revisionResponse.getType())
                    .path(formData.getFilePath())
                    .description(formData.getDescription())
                    .build());
}
and here are the dependencies that I used:
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-smallrye-graphql-client</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-resteasy-reactive</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-resteasy-reactive-jsonb</artifactId>
</dependency>
When I run this function, it gets blocked in entityRepository.createRevision(formData) (the console shows the GraphQL request log, but the request never actually hits the target GraphQL endpoint).
However, if I add the @Blocking annotation in the controller layer, everything works as expected.
I also tried a Uni response, Uni<RevisionResponse> revisionResponse = entityRepository.createRevision(formData);, but the same error happens.
Has anyone faced this issue? Did I configure something wrong for the non-blocking processing?
Thank you.
For those who face the same issue, I fixed it by wrapping the blocking code in a Uni so that it runs on subscription, on a worker thread:
Uni<RevisionResponse> revisionResponse = Uni.createFrom()
        .item(() -> entityRepository.createRevision(formData))
        .runSubscriptionOn(Infrastructure.getDefaultWorkerPool());
Ref link: https://smallrye.io/smallrye-mutiny/guides/imperative-to-reactive#running-blocking-code-on-subscription
Because you are returning Uni from your method, RESTEasy Reactive is running the method on the event loop (see this for details).
However, it looks like the call to entityRepository.createRevision is blocking IO, which means the event loop thread is being blocked - something that is not allowed to happen.
Using the @Blocking annotation means the request is serviced on a worker pool thread, on which you are allowed to block.
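To make the non-@Blocking version work, the blocking call has to be moved off the event loop onto a worker thread. Here is a plain-JDK sketch of that idea (the method blockingCreateRevision is a hypothetical stand-in for entityRepository.createRevision, not a real API); in Mutiny the equivalent is item(() -> ...) combined with runSubscriptionOn:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BlockingOffloadSketch {

    // Hypothetical stand-in for a blocking call such as entityRepository.createRevision(...)
    static String blockingCreateRevision() {
        try {
            Thread.sleep(100); // simulate blocking IO
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "revision-1";
    }

    public static void main(String[] args) throws Exception {
        // Dedicated worker pool, analogous to Mutiny's default worker pool
        ExecutorService workerPool = Executors.newFixedThreadPool(4);

        // The blocking call runs on the worker pool, not on the caller thread,
        // which is the same idea as runSubscriptionOn(Infrastructure.getDefaultWorkerPool())
        CompletableFuture<String> revision =
                CompletableFuture.supplyAsync(BlockingOffloadSketch::blockingCreateRevision, workerPool);

        System.out.println(revision.get());
        workerPool.shutdown();
    }
}
```

The key point is the same in both worlds: the caller thread (the event loop) only subscribes to the result; the blocking work happens elsewhere.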
I tried https://github.com/zalando/logbook, but it only works in Spring-based applications. Does anyone know how to log requests and responses in the Vert.x framework?
This is a bit of a broad question, but let me try to answer it in a couple of segments.
First, I assume you are looking for a logging library. Vert.x provides one, but it is deprecated and the docs encourage you to use third-party libraries such as Log4j or SLF4J. To use SLF4J, add it as a dependency in your pom.xml like this (assuming you use Maven):
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.32</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.32</version>
</dependency>
After that, you can instantiate a logger like this:
final static Logger logger = LoggerFactory.getLogger("loggerName");
and use it like this:
logger.info("Logging this message!");
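If you want to try things out before adding any dependencies, the JDK's built-in java.util.logging has the same shape as the SLF4J usage above (a minimal stdlib sketch, not the SLF4J API):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class JulSketch {
    // Same idea as LoggerFactory.getLogger("loggerName"), but using the JDK's own logger
    static final Logger LOGGER = Logger.getLogger("loggerName");

    public static void main(String[] args) {
        // By default this goes to the console via the root ConsoleHandler
        LOGGER.log(Level.INFO, "Logging this message!");
    }
}
```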
As for where to log HTTP requests: go to where you handle your HTTP routes and define and register a handler. The response is something you send yourself, so you can easily log it (with the logger) at the place in the code where you create it. This is how you would handle HTTP request logging:
final Handler<RoutingContext> loggingHandler = routingContext -> {
    // access the properties of routingContext.request() here and log what you want
    routingContext.next(); // pass the request on to the next handler in the chain
};
router.route().handler(loggingHandler);
As for the response: somewhere in your code you create it, e.g. HttpServerResponse response = context.response().setStatusCode(status);, and then send it with response.end(content). Before calling .end(), you can log what you need by accessing the properties of response.
Add the following handler to your router and log according to your needs:
private fun loggingHandler(routingContext: RoutingContext) {
routingContext.addHeadersEndHandler {
// log context.request() and context.response() as required
}
routingContext.next()
}
I am confused about how an infinite loop of feign calls might behave.
An example:
Assume I have 2 APIs, A & B.
If I call API A, which in turn calls API B via a Feign HTTP call, which in turn calls API A again via Feign, will it recognize this and break the call chain?
Quick flowchart of calls:
A -> B -> A -> B ... Repeat infinitely?
I have not tried this code; it is just an idea.
But I am assuming that spring-cloud-starter-feign provides some way to resolve this problem. Is this assumption correct?
@PostMapping(RestJsonPath.API_A)
ResponseEntity<byte[]> apiA();

@PostMapping(RestJsonPath.API_B)
ResponseEntity<byte[]> apiB();
Will it execute until it times out, or will Hystrix stop it?
TL;DR:
Feign will keep the connection open on the initial request from A to B until the pre-configured timeout kicks in. At this point, Feign will time out the request and if you have specified a Hystrix fallback, Spring will use your Hystrix fallback as the response.
Explanation:
spring-cloud-starter-feign provides an abstraction layer for writing the HTTP request code. It will not handle potential loops or cycles in your code.
Here is an example spring boot feign client from their tutorials website for demonstration:
@FeignClient(value = "jplaceholder",
        url = "https://jsonplaceholder.typicode.com/",
        configuration = ClientConfiguration.class,
        fallback = JSONPlaceHolderFallback.class)
public interface JSONPlaceHolderClient {

    @RequestMapping(method = RequestMethod.GET, value = "/posts")
    List<Post> getPosts();

    @RequestMapping(method = RequestMethod.GET, value = "/posts/{postId}", produces = "application/json")
    Post getPostById(@PathVariable("postId") Long postId);
}
Notice first that this is an interface: all the code is auto-generated by Spring at startup time, and that code makes RESTful requests to the URLs configured via the annotations. For instance, the second method lets us pass in a path variable, which Spring will ensure makes it onto the URL path of the outbound request.
The important thing to stress here is that this interface is only responsible for the HTTP calls, not for any potential loops. The logic that uses this interface (which you can inject into any other Spring bean) is up to you, the developer.
Github repo where this example came from.
Spring Cloud docs on spring-cloud-starter-openfeign.
Hope this helps you understand the purpose of the OpenFeign project, and that it's up to you to deal with cycles and infinite loops in your application code.
As for Hystrix, that framework comes into play (if it is enabled) only when one of these generated HTTP requests fails, whether through a timeout, a 4xx error, a 5xx error, or a response deserialization error. You configure Hystrix as a sensible default or fallback for when the HTTP request fails.
This is a decent tutorial on Hystrix.
Some points to call out: a Hystrix fallback must implement your Feign client interface, and you must specify that class as your Hystrix fallback in the @FeignClient annotation. Spring and Hystrix will call your fallback class automatically if a Feign request fails.
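Since neither Feign nor Hystrix detects cycles, one common way to guard against A -> B -> A loops is to propagate a hop-count header and refuse to continue past a limit. Below is an illustrative plain-Java simulation of that idea, not Feign code; in a real setup you would read and increment the header in a Feign RequestInterceptor, and the header name and limit here are arbitrary assumptions:

```java
import java.util.HashMap;
import java.util.Map;

// Two mutually recursive "services" break the cycle with a hop-count header.
public class HopCountSketch {

    static final String HOP_HEADER = "X-Hop-Count"; // assumed custom header name
    static final int MAX_HOPS = 5;                  // assumed limit

    static String callApiA(Map<String, String> headers) {
        int hops = Integer.parseInt(headers.getOrDefault(HOP_HEADER, "0"));
        if (hops >= MAX_HOPS) {
            return "cycle broken after " + hops + " hops";
        }
        Map<String, String> next = new HashMap<>(headers);
        next.put(HOP_HEADER, String.valueOf(hops + 1)); // increment before forwarding
        return callApiB(next); // A -> B
    }

    static String callApiB(Map<String, String> headers) {
        int hops = Integer.parseInt(headers.getOrDefault(HOP_HEADER, "0"));
        if (hops >= MAX_HOPS) {
            return "cycle broken after " + hops + " hops";
        }
        Map<String, String> next = new HashMap<>(headers);
        next.put(HOP_HEADER, String.valueOf(hops + 1));
        return callApiA(next); // B -> A, completing the loop
    }

    public static void main(String[] args) {
        // A -> B -> A -> ... stops once the hop count reaches the limit
        System.out.println(callApiA(new HashMap<>()));
    }
}
```

Without such a guard, the chain only stops when a timeout (or Hystrix) kills one of the in-flight requests, as described above.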
I have a service which calls a dozen other services. It reads from a Kafka topic using a @StreamListener in a controller class. For traceability purposes, the same headers (the original request ID) from the Kafka message need to be forwarded to all the other services as well.
Traditionally, with a @PostMapping("/path") or @GetMapping, a request context is generated, and one can access the headers from anywhere using RequestContextHolder.currentRequestAttributes(); I would just pass the HttpHeaders object into a RequestEntity whenever I needed to make an external call.
However, in a StreamListener no request context is generated, and trying to access the RequestContextHolder results in an exception.
Here's an example of what I tried to do, which resulted in an exception:
public class Controller {
    @Autowired Service1 service1;
    @Autowired Service2 service2;

    @StreamListener("stream")
    public void processMessage(Model model) {
        service1.execute(model);
        service2.execute(model);
    }
}

public class Service {
    RestTemplate restTemplate;

    public void execute(Model model) {
        // Do some stuff
        HttpHeaders httpHeaders = RequestContextHolder.currentRequestAttributes().someCodeToGetHttpHeaders();
        HttpEntity<Model> request = new HttpEntity<>(model, httpHeaders);
        restTemplate.exchange(url, HttpMethod.POST, request, String.class);
    }
}
My current workaround is to change the StreamListener to a PostMapping and have another PostMapping call it so that a request context is generated. Another option was to use a ThreadLocal, but that seems just as janky.
I'm aware of the @Headers MessageHeaders annotation for accessing the stream headers; however, that isn't easily accessible without passing the headers down to each and every service, which would affect many unit tests.
Ideally, I need a way to create my own request context (or whatever the proper terminology is) to store request-scoped objects (the HttpHeaders), or another thread-safe way to pass request headers down the stack without adding a request argument to service.execute.
I've found a solution and am leaving it here for anyone else trying to achieve something similar.
If your goal is to forward a set of headers end-to-end through REST controllers and stream listeners, you might want to consider using Spring Cloud Sleuth.
Add it to your project through your maven or gradle configuration:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
Specifically, Spring Cloud Sleuth has a feature to forward headers, or "baggage", by setting the property spring.sleuth.propagation-keys in your application.properties. These key-value pairs are persisted through the entire trace, including any downstream HTTP or stream calls that also configure the same propagation keys.
If these fields need to be accessed on a code level, you can get and set them using the ExtraFieldPropagation static functions:
ExtraFieldPropagation.set("country-code", "FO"); // Set
String countryCode = ExtraFieldPropagation.get("country-code"); // Get
Note that the ExtraFieldPropagation setter cannot set a property that is not present in the defined spring.sleuth.propagation-keys, so arbitrary keys won't be accepted.
You can read up on the documentation for more information
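Under the hood, baggage propagation amounts to a context map bound to the processing thread. If you cannot pull in Sleuth, a minimal plain-Java sketch of the same idea looks like this (all names here - RequestContext, withContext - are hypothetical, not Sleuth API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Callable;

// A thread-bound context map: populated once at the message entry point,
// readable anywhere down the call stack without threading a parameter
// through every service method.
public final class RequestContext {

    private static final ThreadLocal<Map<String, String>> CONTEXT =
            ThreadLocal.withInitial(HashMap::new);

    public static void put(String key, String value) {
        CONTEXT.get().put(key, value);
    }

    public static String get(String key) {
        return CONTEXT.get().get(key);
    }

    public static void clear() {
        CONTEXT.remove(); // avoid leaking state across pooled threads
    }

    // Run a task with the context populated, then clean up - mirroring
    // what a listener wrapper would do once per incoming message.
    public static <T> T withContext(Map<String, String> headers, Callable<T> task) throws Exception {
        try {
            headers.forEach(RequestContext::put);
            return task.call();
        } finally {
            clear();
        }
    }
}
```

In the @StreamListener you would wrap the message processing in withContext with the incoming Kafka headers, and Service.execute could then read RequestContext.get("request-id") instead of RequestContextHolder. The clear() in the finally block matters because listener threads are pooled and reused.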
I am trying to start a MockServer from Java and keep it running to receive incoming requests from other sources (Postman, cURL, etc.).
I have tried the JUnit approach, but when the unit tests finish, the server is shut down.
On the other hand, running the standalone version
http://www.mock-server.com/mock_server/running_mock_server.html#running_from_command_line
keeps the mock server running.
I would like to achieve the same thing, but from the Java code.
The question is, how may I make it run and stay running?
Thanks
So you need an HTTP server for non-testing purposes? I'd try with Spring, something like:
@RestController
public class CatchAllController {

    @RequestMapping("/**")
    public String catchAll(HttpServletRequest request) {
        return request.getRequestURI();
    }
}
There is an example on that same page (paragraph "Client API - starting and stopping"). This code works for me:
import static org.mockserver.integration.ClientAndServer.startClientAndServer;

import org.mockserver.integration.ClientAndServer;

public class Checker {
    public static void main(String[] args) {
        ClientAndServer mockServer = startClientAndServer(1080);
    }
}
You have to call
mockServer.stop();
later to stop it.
You will need the following maven dependency:
<!-- mockserver -->
<dependency>
    <groupId>org.mock-server</groupId>
    <artifactId>mockserver-netty</artifactId>
    <version>5.5.1</version>
</dependency>
I am developing a REST API in Spring Boot which provides its response within mostly 1-3 seconds. My controller looks like this:
@RestController
public class ApiController {

    @RequestMapping(value = "/data", produces = {MediaType.APPLICATION_JSON_VALUE}, method = RequestMethod.GET)
    public ResponseEntity<List<ApiObject>> getData() {
        List<ApiObject> apiDataList = getApiData();
        return new ResponseEntity<>(apiDataList, HttpStatus.OK);
    }

    public List<ApiObject> getApiData() {
        List<ApiObject> apiDataList3 = new ArrayList<>();
        // do the processing
        return apiDataList3;
    }
}
So I have 300 users concurrently loading the API. I performed a load test with JMeter and it was mostly OK, but there were still some failures (not all API calls were served). How do I overcome this? How do I implement a queue for incoming API calls, or some other mechanism, to ensure every API call is responded to with data?
Do you mean that you would like to make sure all the requests return data? If so, you can use @Async and return a CompletableFuture, then use that CompletableFuture in your controller to build the response. In case some calls fail, you can set a timeout and catch the exception to log the error.
Hope this helps.
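Another angle, separate from @Async: bound the concurrency so that excess calls wait in line instead of failing. A plain-Java sketch of the idea (the pool sizes and timings are arbitrary assumptions, not tuned values; in Spring you would get a similar effect by sizing the servlet thread pool or backing @Async with a bounded executor):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Cap how many requests are processed at once; the rest queue up on the
// semaphore instead of being rejected under load.
public class ThrottleSketch {

    static final Semaphore PERMITS = new Semaphore(10); // at most 10 concurrent calls
    static final AtomicInteger SERVED = new AtomicInteger();

    static void handleRequest() throws InterruptedException {
        PERMITS.acquire();           // wait for a slot instead of failing
        try {
            Thread.sleep(10);        // simulate 10 ms of processing work
            SERVED.incrementAndGet();
        } finally {
            PERMITS.release();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate 300 concurrent callers, like the JMeter load test
        ExecutorService pool = Executors.newFixedThreadPool(50);
        for (int i = 0; i < 300; i++) {
            pool.submit(() -> {
                try {
                    handleRequest();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        System.out.println("served " + SERVED.get() + " of 300 requests");
    }
}
```

The trade-off is latency: callers beyond the concurrency cap wait rather than fail, so you would still combine this with a client-side timeout that is generous enough for the queueing delay.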