How to map all interaction models of RSocket in Spring Boot - java

RSocket provides four interaction models, plus metadata push:
- fire-and-forget
- request/response
- request stream
- request channel
- (metadata push)
Spring (and Spring Boot) provides RSocket integration, which makes it easy to build an RSocket server on top of the existing messaging infrastructure, hiding the raw RSocket APIs.
#MessageMapping("hello")
public Mono<Void> hello(Greeting p) {
log.info("received: {} at {}", p, Instant.now());
return Mono.empty();
}
#MessageMapping("greet.{name}")
public Mono<String> greet(#DestinationVariable String name, #Payload Greeting p) {
log.info("received: {}, {} at {}", name, p, Instant.now());
return Mono.just("Hello " + name + ", " + p.getMessage() + " at " + Instant.now());
}
#MessageMapping("greet-stream")
public Flux<String> greetStream(#Payload Greeting p) {
log.info("received: {} at {}", p, Instant.now());
return Flux.interval(Duration.ofSeconds(1))
.map(i -> "Hello #" + i + "," + p.getMessage() + " at " + Instant.now());
}
And on the client side, an RSocketRequester is provided to talk to the server.
#GetMapping("hello")
Mono<Void> hello() {
return this.requester.route("hello").data(new Greeting("Welcome to Rsocket")).send();
}
#GetMapping("name/{name}")
Mono<String> greet(#PathVariable String name) {
return this.requester.route("greet." + name).data(new Greeting("Welcome to Rsocket")).retrieveMono(String.class);
}
#GetMapping(value = "stream", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
Flux<String> greetStream() {
return this.requester.route("greet-stream").data(new Greeting("Welcome to Rsocket"))
.retrieveFlux(String.class)
.doOnNext(msg -> log.info("received messages::" + msg));
}
But how do you use the requestChannel and metadataPush models the Spring way (using the messaging infrastructure)?
The sample code is on GitHub. Update: added a requestChannel sample.
Update: SETUP and METADATA_PUSH can be handled by @ConnectMapping, and Spring Security RSocket can secure SETUP and REQUEST.
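For illustration, here is a minimal sketch of such a handler, assuming a plain @Controller; the "setup" route and the String payload are hypothetical choices, not from the original post:

import org.springframework.messaging.handler.annotation.Payload;
import org.springframework.messaging.rsocket.RSocketRequester;
import org.springframework.messaging.rsocket.annotation.ConnectMapping;
import org.springframework.stereotype.Controller;
import reactor.core.publisher.Mono;

@Controller
public class ConnectionController {

    // @ConnectMapping methods receive SETUP and METADATA_PUSH frames;
    // they cannot return data, and an error signal rejects the connection.
    @ConnectMapping("setup") // hypothetical route name
    public Mono<Void> onConnect(RSocketRequester requester, @Payload String clientId) {
        return Mono.empty();
    }
}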

Reference example
For a reference example, let's refer to the client-to-server integration tests and, in particular, to the ServerController class: spring-framework/RSocketClientToServerIntegrationTests.java (line 200) at 6d7bf8050fe710c5253e6032233021d5e025e1d5 · spring-projects/spring-framework · GitHub.
This commit has been mentioned in the release notes:
<…>
RSocket support including response handling via annotated @MessageMapping methods and performing requests via RSocketRequester.
<…>
— Spring Framework 5.2.0.M1 available now.
Channel interaction model
The corresponding code part of the reference example:
#MessageMapping("echo-channel")
Flux<String> echoChannel(Flux<String> payloads) {
return payloads.delayElements(Duration.ofMillis(10)).map(payload -> payload + " async");
}
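On the client side, the channel model could then be exercised with the same RSocketRequester by sending a Flux as the data; a minimal sketch, combining the route above with the requester style from the question (the "channel" endpoint mapping is a hypothetical example):

@GetMapping(value = "channel", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
Flux<String> echoChannel() {
    // Passing a Publisher instead of a single object selects the request-channel model
    return this.requester.route("echo-channel")
            .data(Flux.just("first", "second", "third"), String.class)
            .retrieveFlux(String.class);
}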
Metadata push
It seems that, currently, it is not supported by the @MessageMapping annotation.

Related

Spring Cloud Gateway log 404

I need to log incoming requests, so I've added this class:
@Slf4j
@Component
public class LoggingFilter implements GlobalFilter {

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        Set<URI> uris = exchange.getAttributeOrDefault(GATEWAY_ORIGINAL_REQUEST_URL_ATTR, Collections.emptySet());
        String originalUri = uris.isEmpty() ? "Unknown" : uris.iterator().next().toString();
        Route route = exchange.getAttribute(GATEWAY_ROUTE_ATTR);
        URI routeUri = exchange.getAttribute(GATEWAY_REQUEST_URL_ATTR);
        log.info("Incoming request " + originalUri + " is routed to id: " + route.getId()
                + ", uri: " + routeUri);
        return chain.filter(exchange);
    }
}
It works perfectly for successful requests, but when no route is found, nothing shows up in the log.
Any ideas how I can log those requests too? Thanks!
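One hedged idea, not from the original thread: a GlobalFilter only runs once a route has matched, so unmatched requests never reach it; a plain WebFilter sees every request and could log the 404s after the fact. A sketch:

import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import org.springframework.web.server.WebFilter;
import org.springframework.web.server.WebFilterChain;
import reactor.core.publisher.Mono;

@Slf4j
@Component
public class AllRequestsLoggingFilter implements WebFilter {

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
        return chain.filter(exchange)
                // Runs for every request, matched or not; by completion time
                // the response status is set, so 404s show up here.
                .doFinally(signal -> log.info("{} {} -> {}",
                        exchange.getRequest().getMethod(),
                        exchange.getRequest().getURI(),
                        exchange.getResponse().getStatusCode()));
    }
}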

How to put an object into S3 using Webflux asynchronously?

The article AWS S3 with Java – Reactive describes how to use the AWS SDK 2.0 client with WebFlux.
In the example, they use the following handler to upload to S3 and then return an HTTP Created response:
@PostMapping
public Mono<ResponseEntity<UploadResult>> uploadHandler(@RequestHeader HttpHeaders headers,
                                                        @RequestBody Flux<ByteBuffer> body) {
    long length = headers.getContentLength();
    String fileKey = UUID.randomUUID().toString();
    Map<String, String> metadata = new HashMap<String, String>();
    CompletableFuture<PutObjectResponse> future = s3client
            .putObject(PutObjectRequest.builder()
                            .bucket(s3config.getBucket())
                            .contentLength(length)
                            .key(fileKey)
                            .contentType(MediaType.APPLICATION_OCTET_STREAM.toString())
                            .metadata(metadata)
                            .build(),
                    AsyncRequestBody.fromPublisher(body));
    return Mono.fromFuture(future)
            .map(response -> {
                checkResult(response);
                return ResponseEntity
                        .status(HttpStatus.CREATED)
                        .body(new UploadResult(HttpStatus.CREATED, new String[] {fileKey}));
            });
}
This works as intended. Trying to learn WebFlux, I expected that the following would complete the upload to S3 asynchronously, on the same thread the subscribe method is called on:
@PostMapping
public Mono<ResponseEntity<UploadResult>> uploadHandler(@RequestHeader HttpHeaders headers, @RequestBody Flux<ByteBuffer> body) {
    long length = headers.getContentLength();
    String fileKey = UUID.randomUUID().toString();
    Map<String, String> metadata = new HashMap<String, String>();
    Mono<PutObjectResponse> putObjectResponseMono = Mono.fromFuture(s3client
            .putObject(PutObjectRequest.builder()
                            .bucket(s3config.getBucket())
                            .contentLength(length)
                            .key(fileKey)
                            .contentType(MediaType.APPLICATION_OCTET_STREAM.toString())
                            .metadata(metadata)
                            .build(),
                    AsyncRequestBody.fromPublisher(body)));
    putObjectResponseMono
            .doOnError(e -> log.error("Error putting object to S3 " + Thread.currentThread().getName(), e))
            .subscribe(response -> log.info("Response from S3: " + response.toString() + " on " + Thread.currentThread().getName()));
    return Mono.just(ResponseEntity
            .status(HttpStatus.CREATED)
            .body(new UploadResult(HttpStatus.CREATED, new String[]{fileKey})));
}
The HTTP POST completes as expected, but the S3 put request fails with this log message:
2020-06-10 12:31:22.275 ERROR 800 --- [tyEventLoop-0-4] c.b.aws.reactive.s3.UploadResource : Error happened on aws-java-sdk-NettyEventLoop-0-4
software.amazon.awssdk.core.exception.SdkClientException: 400 BAD_REQUEST "Request body is missing: public reactor.core.publisher.Mono<org.springframework.http.ResponseEntity<com.baeldung.aws.reactive.s3.UploadResult>> com.baeldung.aws.reactive.s3.UploadResource.uploadHandler(org.springframework.http.HttpHeaders,reactor.core.publisher.Flux<java.nio.ByteBuffer>)"
at software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:97) ~[sdk-core-2.10.27.jar:na]
at software.amazon.awssdk.core.internal.util.ThrowableUtils.asSdkException(ThrowableUtils.java:98) ~[sdk-core-2.10.27.jar:na]
at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryExecutor.retryIfNeeded(AsyncRetryableStage.java:125) ~[sdk-core-2.10.27.jar:na]
at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryExecutor.lambda$execute$0(AsyncRetryableStage.java:107) ~[sdk-core-2.10.27.jar:na]
........
I suspect the explanation involves the request to S3 being run on its own thread, but I'm stumped working out what is going wrong. Can you shed any light on it?
Try this: replace @RequestBody Flux<ByteBuffer> body with @RequestBody byte[] body, and replace AsyncRequestBody.fromPublisher(body) with AsyncRequestBody.fromBytes(body).
And if you want to subscribe from another thread, use .subscribeOn({Schedulers}).
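Applied to the handler above, the suggestion would look roughly like this; this is a sketch of the answer's idea, not a verified fix, and Schedulers.boundedElastic() is my assumption for the elided scheduler:

@PostMapping
public Mono<ResponseEntity<UploadResult>> uploadHandler(@RequestHeader HttpHeaders headers,
                                                        @RequestBody byte[] body) {
    String fileKey = UUID.randomUUID().toString();
    Mono<PutObjectResponse> put = Mono.fromFuture(s3client
            .putObject(PutObjectRequest.builder()
                            .bucket(s3config.getBucket())
                            .contentLength((long) body.length)
                            .key(fileKey)
                            .contentType(MediaType.APPLICATION_OCTET_STREAM.toString())
                            .build(),
                    // Buffering the body up front avoids the request Flux being
                    // consumed before the S3 client subscribes to it
                    AsyncRequestBody.fromBytes(body)));
    return put
            .subscribeOn(Schedulers.boundedElastic()) // the answer's suggestion; scheduler choice is an assumption
            .map(response -> ResponseEntity
                    .status(HttpStatus.CREATED)
                    .body(new UploadResult(HttpStatus.CREATED, new String[] {fileKey})));
}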

Spring Integration Java DSL with a defined IntegrationFlow - missing data in response and mismatched correlationIds

I am using Spring Integration Java DSL with a defined IntegrationFlow. I am seeing behavior where the response is missing pieces of data, and the correlationId in the aggregator response does not match the value in the response received by the calling service.
Background:
I have a JMeter performance test running on a server that uses random data and is running at 600 requests per minute. On my laptop, I have a SoapUI performance test running that hits the same server. The SoapUI project sends requests with the same search criteria (we are doing matching) at a rate of 60 requests per minute. The responses should all contain the same result data.
Approximately 0.5% of the time the response is returned with data missing. In these responses, the correlationId of the response that is logged from the aggregator and the correlationId of the response logged from the calling service (logged after the response is returned to the calling service and has already passed through the aggregator) do not match.
Any idea what is wrong? Please see code snippets below.
@Configuration
@EnableAutoConfiguration
@Import(.....AServiceConfig.class)
public class ServiceConfig {

    @Bean(name = "inputChannel")
    public DirectChannel inputChannel() {
        return new DirectChannel();
    }

    @Bean(name = "outputChannel")
    public QueueChannel outputChannel() {
        return new QueueChannel();
    }

    @Bean(name = "transactionLogger")
    public ourLogger ourTransactionLogger() {
        return OurLoggerFactory.getLogger("ourAppTrx", new ourLoggerConfig(ourTransactionLoggerKey.values()));
    }

    public IntegrationFlow ourFlow() {
        return IntegrationFlows.from(inputChannel())
                .split(splitter(ourTransactionLogger()))
                .channel(MessageChannels.executor(getExecutor()))
                .handle(ourServiceActivator, "service")
                .aggregate(t -> t.processor(ourAggregator, AGGREGATE))
                .channel(outputChannel())
                .get();
    }

    @Bean(name = "executor")
    public Executor getExecutor() {
        ThreadPoolExecutor executor = (ThreadPoolExecutor) Executors.newCachedThreadPool();
        executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
        return executor;
    }
}
// snippet from calling service
public InquiryResponse inquire(InquiryRequest request) {
    inputChannel.send(MessageBuilder.withPayload(request).build());
    Message<?> msgResponse = outputChannel.receive();
    InquiryResponse response = (InquiryResponse) msgResponse.getPayload();
    TransactionLogger.debug("correlationId = " + msgResponse.getHeaders().get("correlationId"));
    TransactionLogger.debug("InquiryService inquire response = " + response.toString());
    return response;
}
// snippet from aggregator
@Aggregator
public <T> InquiryResponse aggregate(List<Message> serviceResponses) {
    InquiryResponse response = new InquiryResponse();
    serviceResponses.forEach(serviceResponse -> {
        Object payload = serviceResponse.getPayload();
        if (payload instanceof AMatchResponse) {
            response.setA(((AMatchResponse) payload).getA());
        } else if (payload instanceof BValueResponse) {
            response.setB(((BValueResponse) payload).getB());
        } else if (payload instanceof BError) {
            response.setB(new B().addBErrorsItem((BError) payload));
        } else if (payload instanceof AError) {
            response.setA(new A().AError((AError) payload));
        } else {
            transactionLogger.warn("Unknown message type received. This message will not be aggregated into the response. ||| model=" + payload.getClass().getName());
        }
    });
    transactionLogger.debug("OurAggregator.response = " + response.toString());
    return response;
}
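One hedged guess, not from the original thread: outputChannel is a single shared QueueChannel, so under concurrent load two callers can receive() each other's replies, which would explain both the mismatched correlationIds and the missing data. A request-scoped reply channel, for example via a @MessagingGateway, ties each reply to its own request; a sketch:

import org.springframework.integration.annotation.Gateway;
import org.springframework.integration.annotation.MessagingGateway;

// Hypothetical gateway: Spring Integration creates a temporary reply channel
// per call, so concurrent replies cannot cross between callers.
@MessagingGateway
public interface InquiryGateway {

    @Gateway(requestChannel = "inputChannel")
    InquiryResponse inquire(InquiryRequest request);
}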

Spring Integration application with external Web Services monitoring

Currently I have an application built with the Spring Integration DSL that has an AMQP inbound gateway with different service activators. Each service activator contains logic to decide, transform, and call external web services (currently with CXF), but all of this logic is plain code without Spring Integration components.
These service activators are monitored: on the output channel that returns data from this application there is an AMQP adapter that sends the headers to a queue (after that, all headers are processed and saved in a database for future analysis). This works well, and these service activators even record elapsed time in headers.
Now the problem is that I need to monitor the external web service calls as well: the elapsed time of each operation, which service endpoint and operation was called, and whether an error occurred.
I've been thinking that the logic in each service activator should be converted into a Spring Integration flow; each service activator would call a new gateway with the name of the web service operation in a header, and I would monitor every flow as I had been doing.
I'm not sure this manual approach is the best one, so I wonder: is there a way to get the name of the service operation with some kind of interceptor or something similar in CXF or Spring WS, to avoid setting the operation name in headers manually? What would you recommend?
To have more context here is the Spring Integration configuration:
@Bean
public IntegrationFlow inboundFlow() {
    return IntegrationFlows.from(Amqp.inboundGateway(simpleMessageListenerContainer())
            .mappedReplyHeaders(AMQPConstants.AMQP_CUSTOM_HEADER_FIELD_NAME_MATCH_PATTERN)
            .mappedRequestHeaders(AMQPConstants.AMQP_CUSTOM_HEADER_FIELD_NAME_MATCH_PATTERN)
            .errorChannel(gatewayErrorChannel())
            .requestChannel(gatewayRequestChannel())
            .replyChannel(gatewayResponseChannel())
            )
            .enrichHeaders(new Consumer<HeaderEnricherSpec>() {
                @Override
                public void accept(HeaderEnricherSpec t) {
                    t.headerExpression(AMQPConstants.START_TIMESTAMP, "T(java.lang.System).currentTimeMillis()");
                }
            })
            .transform(getCustomFromJsonTransformer())
            .route(new HeaderValueRouter(AMQPConstants.OPERATION_ROUTING_KEY))
            .get();
}

@Bean
public MessageChannel gatewayRequestChannel() {
    return MessageChannels.publishSubscribe().get();
}

@Bean
public MessageChannel gatewayResponseChannel() {
    return MessageChannels.publishSubscribe().get();
}

private IntegrationFlow loggerOutboundFlowTemplate(MessageChannel fromMessageChannel) {
    return IntegrationFlows.from(fromMessageChannel)
            .handle(Amqp.outboundAdapter(new RabbitTemplate(getConnectionFactory()))
                    .exchangeName(LOGGER_EXCHANGE_NAME)
                    .routingKey(LOGGER_EXCHANGE_ROUTING_KEY)
                    .mappedRequestHeaders("*"))
            .get();
}
And here is a typical service activator; as you can see, all of this logic could be an integration flow:
@ServiceActivator(inputChannel = "myServiceActivator", outputChannel = ConfigurationBase.MAP_RESPONSE_CHANNEL_NAME)
public Message<Map<String, Object>> myServiceActivator(Map<String, Object> input, @Header(AMQPConstants.SESSION) UserSession session) throws MyException {
    Message<Map<String, Object>> result = null;
    Map<String, Object> mapReturn = null;
    ExternalService port = serviceConnection.getExternalService();
    try {
        if (input.containsKey(MappingConstants.TYPE)) {
            Request request = transformer.transformRequest(input, session);
            Response response = port.getSomething(request);
            utils.processBackendCommonErrors(response.getCode(), response.getResponse());
            mapReturn = transformer.convertToMap(response);
        } else {
            Request request = transformer.transformRequest(input, session);
            Response response = port.getSomethingElse(request);
            utils.processBackendCommonErrors(response.getCode(), response.getResponse());
            mapReturn = transformer.convertToMap(response);
        }
    } catch (RuntimeException e) {
        String message = "unexpected exception from the back-end";
        logger.warn(message, e);
        throw MyException.generateTechnicalException(message, null, e);
    }
    result = MessageBuilder.withPayload(mapReturn).build();
    return result;
}
So far so good. Either I don't understand the problem, or you are not clear about where it is.
Anyway, you can always proxy any Spring service with AOP, since it looks like you are pointing to this code:
Response response = port
        .getSomething(request);
When this (or a similar) method is called, a MethodInterceptor can perform the desired tracing logic and send the result to some MessageChannel for further analysis or whatever else needs doing:
public Object invoke(MethodInvocation invocation) throws Throwable {
    // Extract the required operation name and start timestamp from the MethodInvocation
    Object result = invocation.proceed();
    // Extract the required data from the response,
    // then build a message and send it to the channel
    return result;
}
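A sketch of how such an interceptor might be wired; the advisor bean and the method-name pattern are assumptions, and auto-proxying must be enabled for the advisor to apply:

import org.aopalliance.intercept.MethodInterceptor;
import org.springframework.aop.support.NameMatchMethodPointcutAdvisor;
import org.springframework.context.annotation.Bean;

// Hypothetical wiring: apply a tracing MethodInterceptor to the port's
// getSomething/getSomethingElse methods by name.
@Bean
public NameMatchMethodPointcutAdvisor tracingAdvisor(MethodInterceptor tracingInterceptor) {
    NameMatchMethodPointcutAdvisor advisor = new NameMatchMethodPointcutAdvisor(tracingInterceptor);
    advisor.addMethodName("getSomething*");
    return advisor;
}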

spring mvc integrate with reactive stream

I have a RESTful API application built on Spring MVC.
Recently I've been integrating Spring MVC with reactive streams (like RxJava and Project Reactor), trying to make the application more reactive.
I have built some demos like the ones below:
1. For RxJava, I use a PublishSubject:
private SerializedSubject<StreamResult, StreamResult> subject = PublishSubject.<StreamResult>create().toSerialized();

public ReactiveStreamController() {
    this.subject.subscribe(streamResult -> {
        String id = streamResult.getRequest().getParameter("id");
        System.out.println("[" + Thread.currentThread().getName() + "] request received. id = " + id);
        String random = StringUtils.isBlank(id) ? StringUtils.EMPTY : id;
        ResponseVO vo = new ResponseVO(200, "success = " + random);
        streamResult.getFuture().complete(vo);
    }, Throwable::printStackTrace);
}

@ResponseBody
@RequestMapping(value = "/rxJava", method = RequestMethod.GET)
public CompletableFuture<ResponseVO> rxJavaController(HttpServletRequest httpServletRequest) {
    StreamResult sr = new StreamResult();
    sr.setRequest(httpServletRequest);
    subject.onNext(sr);
    return sr.getFuture();
}
2. For Project Reactor:
@ResponseBody
@RequestMapping(value = "/reactorCodeNew", method = RequestMethod.GET)
public CompletableFuture<ResponseVO> reactorCoreNewParadigm(HttpServletRequest servletRequest) {
    Mono<ResponseVO> mono = Mono.just(servletRequest)
            .subscribeOn(Schedulers.fromExecutor(executorService))
            .map(request -> {
                String id = request.getParameter("id");
                System.out.println("[" + Thread.currentThread().getName() + "] request received. id = " + id);
                String random = StringUtils.isBlank(id) ? StringUtils.EMPTY : id;
                ResponseVO vo = new ResponseVO(200, "success = " + random);
                return vo;
            })
            .timeout(Duration.ofSeconds(2), Mono.just(new ResponseVO(500, "error")));
    return mono.toFuture();
}
While running both demos, I don't see much difference from just returning a plain Java CompletableFuture from the controller method.
My understanding of reactive streams, and what I want, is to treat the servlet request as a stream and consume it with features like backpressure.
I want to know:
1. Is there a better way to make the application more reactive?
2. Is it correct or compatible to integrate Spring MVC with reactive streams? If yes, how can I get features like backpressure?
I realize I may have forgotten to explain why/how I return a CompletableFuture from the controller: I inject a customized MethodReturnValueHandler that transforms the CompletableFuture into a DeferredResult.
public class CompletableFutureMethodReturnValueHandler extends DeferredResultMethodReturnValueHandler {

    @Override
    public boolean supportsReturnType(MethodParameter returnType) {
        return CompletableFuture.class.isAssignableFrom(returnType.getParameterType());
    }

    @Override
    public void handleReturnValue(Object returnValue, MethodParameter returnType, ModelAndViewContainer mavContainer, NativeWebRequest webRequest) throws Exception {
        CompletableFuture<?> completableFuture = (CompletableFuture<?>) returnValue;
        super.handleReturnValue(CompletableDeferredResult.newInstance(completableFuture), returnType, mavContainer, webRequest);
    }
}
Spring MVC is based on the Servlet API and is mostly blocking internally, so it cannot leverage reactive streams behavior. Writing adapters for the Controller layer won't be enough.
The Spring team is working on a separate initiative for this purpose. Follow SPR-14161 and the Spring blog (including this and this) to know more about reactive Spring.
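For what it's worth, that initiative later shipped as Spring WebFlux in Spring Framework 5, where a controller returns reactive types directly and the framework itself subscribes with backpressure. A minimal sketch, reusing the ResponseVO type from the question:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Mono;

@RestController
public class ReactiveController {

    // No CompletableFuture bridging or custom return-value handler is needed:
    // WebFlux subscribes to the returned Mono itself.
    @GetMapping("/reactor")
    public Mono<ResponseVO> get(@RequestParam(name = "id", required = false) String id) {
        return Mono.fromSupplier(() -> new ResponseVO(200, "success = " + (id == null ? "" : id)));
    }
}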
