How to create a free text search in MongoDB with Micronaut - Java

I am using reactive MongoDB and trying to implement free text search based on field weights. The dependency:
implementation("io.micronaut.mongodb:micronaut-mongo-reactive")
applied to the POJO below:
public class Product {

    @BsonProperty("_id")
    @BsonId
    private ObjectId id;
    private String name;
    private float price;
    private String description;
}
I tried this simple example:
public Flowable<List<Product>> findByFreeText(String text) {
    LOG.info(String.format("Listener --> Listening value = %s", text));
    Flowable.fromPublisher(this.repository.getCollection("product", List.class)
        .find(new Document("$text", new Document("$search", text)
            .append("$caseSensitive", false)
            .append("$diacriticSensitive", false))))
        .subscribe(item -> {
            System.out.println(item);
        }, error -> {
            System.out.println(error);
        });
    return Flowable.just(List.of(new Product()));
}
I don't think this is the correct way of implementing free text search.

First, you don't need a Flowable of List of Product, because unlike Single a Flowable can emit more than one value, so Flowable<Product> is enough. Then you can simply return the Flowable instance produced by the find method.
Text search can then be implemented like this:
public Flowable<Product> findByFreeText(final String query) {
    return Flowable.fromPublisher(repository.getCollection("product", Product.class)
        .find(new Document("$text",
            new Document("$search", query)
                .append("$caseSensitive", false)
                .append("$diacriticSensitive", false)
        )));
}
Then it is up to the consumer of the method how it subscribes to the resulting Flowable. In a controller you can return the Flowable instance directly. If you need to consume it somewhere else in your code, you can call subscribe(), blockingSubscribe() and so on.
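For example, a minimal controller sketch (the class, route, and service names here are illustrative assumptions, not taken from the post) can return the Flowable directly and let Micronaut subscribe and stream the results:
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.reactivex.Flowable;

@Controller("/products")
public class ProductSearchController {

    // assumed bean exposing the findByFreeText method shown above
    private final ProductSearchService service;

    public ProductSearchController(ProductSearchService service) {
        this.service = service;
    }

    // Micronaut subscribes to the returned Flowable and streams the matches as JSON
    @Get("/search/{text}")
    public Flowable<Product> search(String text) {
        return service.findByFreeText(text);
    }
}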
And you can of course test it with JUnit like this:
@MicronautTest
class SomeServiceTest {

    @Inject
    SomeService service;

    @Test
    void findByFreeText() {
        service.findByFreeText("test")
            .test()
            .awaitCount(1)
            .assertNoErrors()
            .assertValue(p -> p.getName().contains("test"));
    }
}
Update: you can debug the communication with MongoDB by adding this to the logback.xml logging config file (Micronaut uses Logback as its default logging framework):
<configuration>
....
<logger name="org.mongodb" level="debug"/>
</configuration>
Then you will see this in the log file:
16:20:21.257 [Thread-5] DEBUG org.mongodb.driver.protocol.command - Sending command '{"find": "product", "filter": {"$text": {"$search": "test", "$caseSensitive": false, "$diacriticSensitive": false}}, "batchSize": 2147483647, "$db": "some-database"}' with request id 6 to database some-database on connection [connectionId{localValue:3, serverValue:1634}] to server localhost:27017
16:20:21.258 [Thread-7] DEBUG org.mongodb.driver.protocol.command - Execution of command with request id 6 completed successfully in 2.11 ms on connection [connectionId{localValue:3, serverValue:1634}] to server localhost:27017
Then you can copy the command from the log and try it in the MongoDB CLI, or install MongoDB Compass, where you can experiment with it further and see whether the command is correct or not.
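One more note: a $text query only works if the collection has a text index, and since the question mentions weights, that index is also where the weights are defined. A hedged sketch of creating such an index with the reactive driver (the field names and weight values are assumptions for illustration):
import com.mongodb.client.model.IndexOptions;
import com.mongodb.client.model.Indexes;
import io.reactivex.Flowable;
import org.bson.Document;

// e.g. run once at application startup; weights favour "name" over "description"
Flowable.fromPublisher(
        repository.getCollection("product", Product.class)
            .createIndex(
                Indexes.compoundIndex(Indexes.text("name"), Indexes.text("description")),
                new IndexOptions().weights(new Document("name", 10).append("description", 5))))
    .subscribe(indexName -> System.out.println("Created index: " + indexName));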

Related

Long Flux sometimes not complete

I need to transfer some items from a non-reactive repository to a reactive repository (Firestore).
The procedure is triggered from a REST endpoint exposed with Netty.
The code below is what I've written after some trial and error.
The query on the non-reactive repo is not long (~20 sec), but it returns a lot of records and the overall execution time is usually ~60 min.
All records are always saved and all "Saving in progress... XXX" lines are printed, but about 50% of the time it does not print "Saved XXX records", and no errors are printed.
Things I've noticed:
more records -> higher probability of failure
it does not depend on the execution time (sometimes a longer run completes while a shorter one fails)
The app runs on a k8s pod with a 1500Mi RAM request and a 3000Mi limit; from the graphs it never approaches the limit.
What am I missing here?
@Slf4j
@RestController
@RequestMapping("/import")
public class ImportController {

    @Autowired
    private NotReactiveRepository notReactiveRepository;

    @Autowired
    private ReactiveRepository reactiveRepository;

    private static final Scheduler queryScheduler = Schedulers.newBoundedElastic(1, 480, "query", 864000); // max 10 days processing time

    @GetMapping("/start")
    public Mono<String> start() {
        log.info("Start");
        return Mono.just("RECEIVED")
            // fire and forget
            .doOnNext(stringRouteResponse -> startProcess().subscribe());
    }

    private Mono<Long> startProcess() {
        Mono<List<Items>> resultsBlockingMono = Mono
            .fromCallable(() -> notReactiveRepository.findAll())
            .subscribeOn(queryScheduler)
            .retryWhen(Retry.backoff(5, Duration.of(2, ChronoUnit.SECONDS)));

        return resultsBlockingMono
            .doOnNext(records -> log.info("Records: {}", records.size()))
            .flatMapMany(Flux::fromIterable)
            .map(ItemConverter::convert)
            // max 9000 save/sec
            .delayElements(Duration.of(300, ChronoUnit.MICROS))
            .flatMap(this::saveConvertedItem)
            .zipWith(Flux.range(1, Integer.MAX_VALUE))
            .doOnNext(savedAndIndex -> log.info("Saving in progress... {}", savedAndIndex.getT2()))
            .count()
            .doOnNext(numberOfSaved -> log.info("Saved {} records", numberOfSaved));
    }

    private Mono<ConvertedItem> saveConvertedItem(ConvertedItem convertedItem) {
        return reactiveRepository.save(convertedItem)
            .retryWhen(Retry.backoff(1000, Duration.of(2, ChronoUnit.MILLIS)))
            .onErrorResume(throwable -> {
                log.error("Resuming");
                return Mono.empty();
            })
            .doOnError(throwable -> log.error("Error on save"));
    }
}
Update:
As requested, this is the last output of the procedure, where "Saved 1131113 records" should have been printed, with .log() added before .count() (the output after the onNext always prints after the process, also on success):
"Saving... 1131113"
"| onNext([ConvertedItem(...),1131113])"
"Shutting down ExecutorService 'pubsubPublisherThreadPool'"
"Shutting down ExecutorService 'pubSubAcknowledgementExecutor'"
"Shutting down ExecutorService 'pubsubSubscriberThreadPool'"
"Closing JPA EntityManagerFactory for persistence unit 'default'"
"HikariPool-1 - Shutdown initiated..."
"HikariPool-1 - Shutdown completed."

Spring batch job status FAILED when all Steps COMPLETED

I have a Spring Batch job which uses a flow:
Flow productFlow = new FlowBuilder<Flow>("productFlow")
    .start(stageProduct)
    .next(new MyDecider()).on("YES").to(anotherFlow)
    .build();
After I started to use a decider, which checks a value in the JobParameters of the job execution to decide whether to run the next flow or not, I am no longer getting COMPLETED as the overall job status in the JobExecution; it comes out as FAILED.
However, every step in the STEP_EXECUTION table is COMPLETED and none is FAILED.
Have I missed a trick somewhere?
My decider looks like this:
public class AnotherFlowDecider implements JobExecutionDecider {

    @Override
    public FlowExecutionStatus decide(final JobExecution jobExecution, final StepExecution stepExecution) {
        final JobParameters jobParameters = jobExecution.getJobParameters();
        final String name = jobParameters.getString("name");
        if (nonNull(name)) {
            switch (name) {
                case "A":
                    return new FlowExecutionStatus("YES");
                case "B":
                default:
                    return new FlowExecutionStatus("NO");
            }
        }
        throw new MyCustomException(FAULT, "name is not provided as a JobParameter");
    }
}
In debug mode I can see:
2020-12-11 11:10:58.145 DEBUG [cTaskExecutor-4] o.s.b.c.j.f.s.SimpleFlow [eId=, rId=] -- Completed state=productFlow.stageProduct with status=COMPLETED
2020-12-11 11:10:58.146 DEBUG [cTaskExecutor-4] o.s.b.c.j.f.s.SimpleFlow [eId=, rId=] -- Handling state=productFlow.decision0
2020-12-11 11:10:58.146 DEBUG [cTaskExecutor-4] o.s.b.c.j.f.s.SimpleFlow [eId=, rId=] -- Completed state=productFlow.decision0 with status=NO
2020-12-11 11:10:58.146 DEBUG [cTaskExecutor-4] o.s.b.c.j.f.s.SimpleFlow [eId=, rId=] -- Handling state=productFlow.FAILED
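Nothing in the thread confirms the cause, but the last log line suggests the flow has no transition defined for the decider's NO status, so it falls through to a FAILED end state. A sketch of mapping the NO branch explicitly, reusing the stageProduct and anotherFlow names from the question:
JobExecutionDecider decider = new AnotherFlowDecider();

Flow productFlow = new FlowBuilder<Flow>("productFlow")
        .start(stageProduct)
        .next(decider)
        .on("YES").to(anotherFlow)
        .from(decider).on("NO").end() // end the flow as COMPLETED when the decider returns NO
        .build();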

Can't perform PUT command in PostMan

I am new to the world of Spring Boot and MongoDB, so this could be a stupid question.
I created a Spring Boot project linked to a MongoDB database. In the controller I've defined these methods: get, getAll, add, update and delete.
Everything works fine while I test my app in Postman, except for the update method. In Postman, using the PUT verb, I get this error:
"status": 405,
"error": "Method Not Allowed"
Looking for a solution, I found this in the Postman response:
(screenshot: PUT not allowed)
where the value of the "Allow" header only contains "GET, DELETE" and not PUT.
Maybe this is linked to my error? How can I fix it?
Thank you, and sorry for my bad English and lack of knowledge of Spring Boot!
EDIT 1: Controller code:
@PutMapping("/{id}")
public ResponseEntity<Cliente> updateCliente(@PathVariable(value = "id") String id, @RequestBody Cliente cliente) {
    Optional<Cliente> c = clienteRepo.findById(id);
    Cliente _c = new Cliente();
    if (c.isPresent()) {
        _c = c.get();
        _c.setId(cliente.getId());
        _c.setNome(cliente.getNome());
    }
    final Cliente updatedCliente = clienteRepo.save(_c);
    return ResponseEntity.ok(updatedCliente);
}
EDIT 2: PostMan request:
(screenshot: Postman PUT request)
You can check the mapped APIs in the log by adding the following config to the application.properties file:
logging.level.org.springframework.web.servlet.mvc.method.annotation=TRACE
For example:
I have a controller:
@RestController
@RequestMapping("/client")
public class HomeRestController {

    @PutMapping("/{id}")
    public void put(@PathVariable(value = "id") String id, @RequestBody TestingModel model) {
        System.out.println(id);
        System.out.println(model.getName());
    }
}
When starting the application you can see the mapped API in the console log as below:
2020-07-14 09:36:49.287 TRACE 13224 --- [ restartedMain] s.w.s.m.m.a.RequestMappingHandlerMapping :
c.e.e.c.HomeController:
{ /index}: home()
2020-07-14 09:36:49.288 TRACE 13224 --- [ restartedMain] s.w.s.m.m.a.RequestMappingHandlerMapping :
c.e.e.c.HomeRestController:
{PUT /client/{id}}: put(String,TestingModel)
2020-07-14 09:36:49.293 TRACE 13224 --- [ restartedMain] s.w.s.m.m.a.RequestMappingHandlerMapping :
o.s.b.a.w.s.e.BasicErrorController:
{ /error}: error(HttpServletRequest)
{ /error, produces [text/html]}: errorHtml(HttpServletRequest,HttpServletResponse)
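If the TRACE output does not show a {PUT /client/{id}} style mapping for your controller, the 405 is coming from the mapping itself rather than from Postman. As a quick way to exercise the mapping in isolation, a MockMvc test can help; the sketch below reuses the HomeRestController example above, with path and payload values that are just assumptions:
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.put;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest;
import org.springframework.http.MediaType;
import org.springframework.test.web.servlet.MockMvc;

@WebMvcTest(HomeRestController.class)
class HomeRestControllerTest {

    @Autowired
    private MockMvc mockMvc;

    @Test
    void putIsMapped() throws Exception {
        // If this returns 405 instead of 200, the PUT mapping is not registered as expected.
        mockMvc.perform(put("/client/{id}", "42")
                .contentType(MediaType.APPLICATION_JSON)
                .content("{\"name\":\"test\"}"))
            .andExpect(status().isOk());
    }
}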

Spring Cloud Stream Multi Topic Transaction Management

I'm trying to create a PoC application in Java to figure out how to do transaction management in Spring Cloud Stream when using Kafka for message publishing. The use case I'm trying to simulate is a processor that receives a message, does some processing, and generates two new messages destined for two separate topics. I want to be able to handle publishing both messages as a single transaction, so if publishing the second message fails I want to roll back (not commit) the first message. Does Spring Cloud Stream support such a use case?
I've set the @Transactional annotation and I can see a global transaction starting before the message is delivered to the consumer. However, when I try to publish a message via the MessageChannel.send() method, I can see that a new local transaction is started and completed in the KafkaProducerMessageHandler class's handleRequestMessage() method, which means that the sending of the message does not participate in the global transaction. So, if there's an exception thrown after the publishing of the first message, the message will not be rolled back. The global transaction gets rolled back, but that doesn't really help since the first message was already committed.
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost:9092
          transaction:
            transaction-id-prefix: txn.
            producer: # these apply to all producers that participate in the transaction
              partition-key-extractor-name: partitionKeyExtractorStrategy
              partition-selector-name: partitionSelectorStrategy
              partition-count: 3
              configuration:
                acks: all
                enable:
                  idempotence: true
                retries: 10
        bindings:
          input-customer-data-change-topic:
            consumer:
              configuration:
                isolation:
                  level: read_committed
              enable-dlq: true
      bindings:
        input-customer-data-change-topic:
          content-type: application/json
          destination: com.fis.customer
          group: com.fis.ec
          consumer:
            partitioned: true
            max-attempts: 1
        output-name-change-topic:
          content-type: application/json
          destination: com.fis.customer.name
        output-email-change-topic:
          content-type: application/json
          destination: com.fis.customer.email
@SpringBootApplication
@EnableBinding(CustomerDataChangeStreams.class)
public class KafkaCloudStreamCustomerDemoApplication
{
    public static void main(final String[] args)
    {
        SpringApplication.run(KafkaCloudStreamCustomerDemoApplication.class, args);
    }
}

public interface CustomerDataChangeStreams
{
    @Input("input-customer-data-change-topic")
    SubscribableChannel inputCustomerDataChange();

    @Output("output-email-change-topic")
    MessageChannel outputEmailDataChange();

    @Output("output-name-change-topic")
    MessageChannel outputNameDataChange();
}
@Component
public class CustomerDataChangeListener
{
    @Autowired
    private CustomerDataChangeProcessor mService;

    @StreamListener("input-customer-data-change-topic")
    public void handleCustomerDataChangeMessages(
        @Payload final ImmutableCustomerDetails customerDetails)
    {
        mService.processMessage(customerDetails);
    }
}
@Component
public class CustomerDataChangeProcessor
{
    // logger referenced in processMessage()
    private static final Logger LOGGER = LoggerFactory.getLogger(CustomerDataChangeProcessor.class);

    private final CustomerDataChangeStreams mStreams;

    @Value("${spring.cloud.stream.bindings.output-email-change-topic.destination}")
    private String mEmailChangeTopic;

    @Value("${spring.cloud.stream.bindings.output-name-change-topic.destination}")
    private String mNameChangeTopic;

    public CustomerDataChangeProcessor(final CustomerDataChangeStreams streams)
    {
        mStreams = streams;
    }

    public void processMessage(final CustomerDetails customerDetails)
    {
        try
        {
            sendNameMessage(customerDetails);
            sendEmailMessage(customerDetails);
        }
        catch (final JSONException ex)
        {
            LOGGER.error("Failed to send messages.", ex);
        }
    }

    public void sendNameMessage(final CustomerDetails customerDetails) throws JSONException
    {
        final JSONObject nameChangeDetails = new JSONObject();
        nameChangeDetails.put(KafkaConst.BANK_ID_KEY, customerDetails.bankId());
        nameChangeDetails.put(KafkaConst.CUSTOMER_ID_KEY, customerDetails.customerId());
        nameChangeDetails.put(KafkaConst.FIRST_NAME_KEY, customerDetails.firstName());
        nameChangeDetails.put(KafkaConst.LAST_NAME_KEY, customerDetails.lastName());
        final String action = customerDetails.action();
        nameChangeDetails.put(KafkaConst.ACTION_KEY, action);
        final MessageChannel nameChangeMessageChannel = mStreams.outputNameDataChange();
        nameChangeMessageChannel.send(MessageBuilder.withPayload(nameChangeDetails.toString())
            .setHeader(MessageHeaders.CONTENT_TYPE, MimeTypeUtils.APPLICATION_JSON)
            .setHeader(KafkaHeaders.TOPIC, mNameChangeTopic).build());
        if ("fail_name_illegal".equalsIgnoreCase(action))
        {
            throw new IllegalArgumentException("Customer name failure!");
        }
    }

    public void sendEmailMessage(final CustomerDetails customerDetails) throws JSONException
    {
        final JSONObject emailChangeDetails = new JSONObject();
        emailChangeDetails.put(KafkaConst.BANK_ID_KEY, customerDetails.bankId());
        emailChangeDetails.put(KafkaConst.CUSTOMER_ID_KEY, customerDetails.customerId());
        emailChangeDetails.put(KafkaConst.EMAIL_ADDRESS_KEY, customerDetails.email());
        final String action = customerDetails.action();
        emailChangeDetails.put(KafkaConst.ACTION_KEY, action);
        final MessageChannel emailChangeMessageChannel = mStreams.outputEmailDataChange();
        emailChangeMessageChannel.send(MessageBuilder.withPayload(emailChangeDetails.toString())
            .setHeader(MessageHeaders.CONTENT_TYPE, MimeTypeUtils.APPLICATION_JSON)
            .setHeader(KafkaHeaders.TOPIC, mEmailChangeTopic).build());
        if ("fail_email_illegal".equalsIgnoreCase(action))
        {
            throw new IllegalArgumentException("E-mail address failure!");
        }
    }
}
EDIT
We are getting closer. The local transaction does not get created anymore. However, the global transaction still gets committed even if there was an exception. From what I can tell, the exception does not propagate to the TransactionTemplate.execute() method, therefore the transaction gets committed. It seems that the MessageProducerSupport class "swallows" the exception in the catch clause of its sendMessage() method: if an error channel is defined, a message is published to it and the exception is not rethrown. I tried turning the error channel off (spring.cloud.stream.kafka.binder.transaction.producer.error-channel-enabled = false) but that doesn't turn it off. So, just for a test, I simply set the error channel to null in the debugger to force the exception to be rethrown. That seems to do it. However, the original message keeps getting redelivered to the initial consumer even though I have max-attempts set to 1 for that consumer.
See the documentation.
spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix
Enables transactions in the binder. See transaction.id in the Kafka documentation and Transactions in the spring-kafka documentation. When transactions are enabled, individual producer properties are ignored and all producers use the spring.cloud.stream.kafka.binder.transaction.producer.* properties.
Default: null (no transactions)
spring.cloud.stream.kafka.binder.transaction.producer.*
Global producer properties for producers in a transactional binder. See spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix and Kafka Producer Properties and the general producer properties supported by all binders.
Default: See individual producer properties.
You must configure the shared global producer.
Don't add @Transactional - the container will start the transaction and send the offset to the transaction before committing the transaction.
If the listener throws an exception, the transaction is rolled back and the DefaultAfterRollbackPostProcessor will re-seek the topics/partitions so that the record will be redelivered.
EDIT
There is a bug in the configuration of the binder's transaction manager that causes a new local transaction to be started by the output binding.
To work around it, reconfigure the TM with the following container customizer bean...
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<byte[], byte[]>> customizer() {
    return (container, dest, group) -> {
        KafkaTransactionManager<?, ?> tm = (KafkaTransactionManager<?, ?>) container.getContainerProperties()
            .getTransactionManager();
        tm.setTransactionSynchronization(AbstractPlatformTransactionManager.SYNCHRONIZATION_ON_ACTUAL_TRANSACTION);
    };
}
EDIT2
You can't use the binder's DLQ support because, from the container's perspective, the delivery was successful. We need to propagate the exception to the container to force a rollback. So, you need to move the dead-lettering to the AfterRollbackProcessor instead. Here is my complete test class:
@SpringBootApplication
@EnableBinding(Processor.class)
public class So57379575Application {

    public static void main(String[] args) {
        SpringApplication.run(So57379575Application.class, args);
    }

    @Autowired
    private MessageChannel output;

    @StreamListener(Processor.INPUT)
    public void listen(String in) {
        System.out.println("in:" + in);
        this.output.send(new GenericMessage<>(in.toUpperCase()));
        if (in.equals("two")) {
            throw new RuntimeException("fail");
        }
    }

    @KafkaListener(id = "so57379575", topics = "so57379575out")
    public void listen2(String in) {
        System.out.println("out:" + in);
    }

    @KafkaListener(id = "so57379575DLT", topics = "so57379575dlt")
    public void listen3(String in) {
        System.out.println("dlt:" + in);
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<byte[], byte[]> template) {
        return args -> {
            template.send("so57379575in", "one".getBytes());
            template.send("so57379575in", "two".getBytes());
        };
    }

    @Bean
    public ListenerContainerCustomizer<AbstractMessageListenerContainer<byte[], byte[]>> customizer(
            KafkaTemplate<Object, Object> template) {
        return (container, dest, group) -> {
            // enable transaction synchronization
            KafkaTransactionManager<?, ?> tm = (KafkaTransactionManager<?, ?>) container.getContainerProperties()
                .getTransactionManager();
            tm.setTransactionSynchronization(AbstractPlatformTransactionManager.SYNCHRONIZATION_ON_ACTUAL_TRANSACTION);
            // container dead-lettering
            DefaultAfterRollbackProcessor<? super byte[], ? super byte[]> afterRollbackProcessor =
                new DefaultAfterRollbackProcessor<>(new DeadLetterPublishingRecoverer(template,
                    (ex, tp) -> new TopicPartition("so57379575dlt", -1)), 0);
            container.setAfterRollbackProcessor(afterRollbackProcessor);
        };
    }
}
and
spring:
  kafka:
    bootstrap-servers:
      - 10.0.0.8:9092
      - 10.0.0.8:9093
      - 10.0.0.8:9094
    consumer:
      auto-offset-reset: earliest
      enable-auto-commit: false
      properties:
        isolation.level: read_committed
  cloud:
    stream:
      bindings:
        input:
          destination: so57379575in
          group: so57379575in
          consumer:
            max-attempts: 1
        output:
          destination: so57379575out
      kafka:
        binder:
          transaction:
            transaction-id-prefix: so57379575tx.
            producer:
              configuration:
                acks: all
                retries: 10

#logging:
#  level:
#    org.springframework.kafka: trace
#    org.springframework.transaction: trace
and
in:two
2019-08-07 12:43:33.457 ERROR 36532 --- [container-0-C-1] o.s.integration.handler.LoggingHandler : org.springframework.messaging.MessagingException: Exception thrown while
...
Caused by: java.lang.RuntimeException: fail
...
in:one
dlt:two
out:ONE

Spring Cloud Gateway for composite API calls?

I am starting to build a Microservice API Gateway, and I am considering Spring Cloud to help me with the routing. But some calls to the Gateway API will need multiple requests to different services.
Let's say I have 2 services: an Order Details Service and a Delivery Service. I want to have a gateway endpoint GET /orders/{orderId} that makes a call to the Order Details Service and then the Delivery Service, and combines the two to return the full order details with delivery. Is this possible with the routing of Spring Cloud Gateway, or should I make these calls by hand using something like RestTemplate?
There is an enhancement proposal posted on GitHub to have routes support multiple URIs. So far there aren't any plans to implement it, at least not according to one of the contributors.
As posted in the Spring Cloud Gateway GitHub issue mentioned by g00glen00b, until the library provides a filter for this, I resolved it using the ModifyResponseBodyGatewayFilterFactory in my own custom filter.
Just in case it's useful for anyone else, I provide the base implementation here (it may need some rework, but it should be enough to make the point).
Simply put, I have a "base" service retrieving something like this:
[
  {
    "targetEntryId": "624a448cbc728123b47d08c4",
    "sections": [
      {
        "title": "sadasa",
        "description": "asda"
      }
    ],
    "id": "624a448c45459c4d757869f1"
  },
  {
    "targetEntryId": "624a44e5bc728123b47d08c5",
    "sections": [
      {
        "title": "asda",
        "description": null
      }
    ],
    "id": "624a44e645459c4d757869f2"
  }
]
And I want to enrich these entries with the actual targetEntry data (of course, identified by targetEntryId).
So, I created my Filter based on the ModifyResponseBody one:
/**
 * <p>
 * Filter to compose a response body with associated data from a second API.
 * </p>
 *
 * @author rozagerardo
 */
@Component
public class ComposeFieldApiGatewayFilterFactory extends
    AbstractGatewayFilterFactory<ComposeFieldApiGatewayFilterFactory.Config> {

    public ComposeFieldApiGatewayFilterFactory() {
        super(Config.class);
    }

    @Autowired
    ModifyResponseBodyGatewayFilterFactory modifyResponseBodyFilter;

    ParameterizedTypeReference<List<Map<String, Object>>> jsonType =
        new ParameterizedTypeReference<List<Map<String, Object>>>() {
        };

    @Value("${server.port:9080}")
    int aPort;

    @Override
    public GatewayFilter apply(final Config config) {
        return modifyResponseBodyFilter.apply((c) -> {
            c.setRewriteFunction(List.class, List.class, (filterExchange, input) -> {
                List<Map<String, Object>> castedInput = (List<Map<String, Object>>) input;
                // extract base field values (usually ids) and join them in a "," separated string
                String baseFieldValues = castedInput.stream()
                    .map(bodyMap -> (String) bodyMap.get(config.getOriginBaseField()))
                    .collect(Collectors.joining(","));
                // Request to a path managed by the Gateway
                WebClient client = WebClient.create();
                return client.get()
                    .uri(UriComponentsBuilder.fromUriString("http://localhost").port(aPort)
                        .path(config.getTargetGatewayPath())
                        .queryParam(config.getTargetQueryParam(), baseFieldValues).build().toUri())
                    .exchangeToMono(response -> response.bodyToMono(jsonType)
                        .map(targetEntries -> {
                            // create a Map using the base field values as keys for easy access
                            Map<String, Map> targetEntriesMap = targetEntries.stream().collect(
                                Collectors.toMap(pr -> (String) pr.get("id"), pr -> pr));
                            // compose the origin body using the requested target entries
                            return castedInput.stream().map(originEntries -> {
                                originEntries.put(config.getComposeField(),
                                    targetEntriesMap.get(originEntries.get(config.getOriginBaseField())));
                                return originEntries;
                            }).collect(Collectors.toList());
                        })
                    );
            });
        });
    }

    @Override
    public List<String> shortcutFieldOrder() {
        return Arrays.asList("originBaseField", "targetGatewayPath", "targetQueryParam",
            "composeField");
    }

    /**
     * <p>
     * Config class to use for AbstractGatewayFilterFactory.
     * </p>
     */
    public static class Config {

        private String originBaseField;
        private String targetGatewayPath;
        private String targetQueryParam;
        private String composeField;

        public Config() {
        }

        // Getters and Setters...
    }
}
For completeness, this is the corresponding route setup using my Filter:
spring:
  cloud:
    gateway:
      routes:
        # TARGET ENTRIES ROUTES
        - id: targetentries_route
          uri: ${configs.api.tagetentries.baseURL}
          predicates:
            - Path=/api/target/entries
            - Method=GET
          filters:
            - RewritePath=/api/target/entries(?<segment>.*), /target-entries-service$\{segment}
        # ORIGIN ENTRIES
        - id: originentries_route
          uri: ${configs.api.originentries.baseURL}
          predicates:
            - Path=/api/origin/entries**
          filters:
            - RewritePath=/api/origin/entries(?<segment>.*), /origin-entries-service$\{segment}
            - ComposeFieldApi=targetEntryId,/api/target/entries,ids,targetEntry
And with this, my resulting response looks as follows:
[
  {
    "targetEntryId": "624a448cbc728123b47d08c4",
    "sections": [
      {
        "title": "sadasa",
        "description": "asda"
      }
    ],
    "id": "624a448c45459c4d757869f1",
    "targetEntry": {
      "id": "624a448cbc728123b47d08c4",
      "targetEntityField": "whatever"
    }
  },
  {
    "targetEntryId": "624a44e5bc728123b47d08c5",
    "sections": [
      {
        "title": "asda",
        "description": null
      }
    ],
    "id": "624a44e645459c4d757869f2",
    "targetEntry": {
      "id": "624a44e5bc728123b47d08c5",
      "targetEntityField": "somethingelse"
    }
  }
]
