Generate multiple from() dynamically Apache Camel RouteBuilder - java

I was using camel-core 2.24.1 and was able to do the following:
from( sources.toArray(new String[0]) )
where sources is a list of URIs that I get from configuration settings. I am trying to update the code to use Camel 3 (camel-core 3.0.0-RC2), but the method mentioned above was removed and I can't find another way to accomplish the same thing.
Basically I need something like:
for( String uri : sources )
{
// add the uri as from(uri) before continuing with the route
}
In case this would help understand better, the final route should look like:
from( sources.toArray(new String[0]) )
    .routeId(Constants.ROUTE_ID)
    .split().method(WorkRequestSplitter.class, "splitMessage")
        .id(Constants.WORK_REQUEST_SPLITTER_ID)
    .split().method(RequestSplitter.class, "splitMessage")
        .id(Constants.REQUEST_SPLITTER_ID)
    .choice()
        .when(useReqProc)
            .log(LoggingLevel.INFO, "Found the request processor, using it")
            .to("bean:" + reqName)
        .endChoice()
        .otherwise()
            .log(LoggingLevel.ERROR, "requestProcessor not found, stopping route")
            .stop()
        .endChoice()
    .end()
    .log("Sending the request to the URI")
    .recipientList(header(Constants.HDR_ARES_URI))
    .choice()
        .when(useResProc)
            .log(LoggingLevel.INFO, "Found the results processor, using it")
            .to("bean:" + resName)
        .endChoice()
        .otherwise()
            .log(LoggingLevel.INFO, "resultProcessor not found, sending 'as is'")
        .endChoice()
    .end()
    .log("Sending the request to all listeners")
    .to( this.destinations.toArray( new String[0] ) );
Any help will be greatly appreciated.

This feature was removed with no direct replacement in CAMEL-6589.
See Migration guide:
In Camel 2.x you could have 2 or more inputs to Camel routes. However, this was not supported in all use-cases in Camel, and this functionality was seldom in use. It was also deprecated in Camel 2.x. In Camel 3 we have removed the remaining code for specifying multiple inputs to routes, and it is now only possible to specify exactly one input to a route.
You can always split your route definition into logical blocks with a Direct endpoint. These blocks can also be generated dynamically with a for-each loop.
for (String uri : sources) {
    from(uri).to("direct:commonProcess");
}

from("direct:commonProcess")
    .routeId(Constants.ROUTE_ID)
    //...
    .log("Sending the request to all listeners")
    .to(this.destinations.toArray(new String[0]));

Related

Which EIP to use: retrieve config by message content, mapping, filtering and OAuth then send, retry only send part

I am new to Camel, and my use case is as follows:
We receive messages from AMQ, and we want to remap each message and send it to different customer endpoints.
Each customer has a config of which fields to include, plus the OAuth URL, the URL to send the message to (REST APIs), and credentials.
Customers are grouped under agents; one agent can manage several customers. We have the config in a Map, organized by "agentId" as key and a list of "customerConfigs" as value.
By one field in the message, we decide which agent this message should be sent to.
Then we iterate over all customers under that agent, check which fields each one needs, and remap the message accordingly.
We also filter by checking whether the message content meets the customer's criteria. If yes, we do OAuth against that customer's OAuth URL and send the message to them. If not, we skip it.
We are doing this with Camel, and so far all steps from receiving to mapping and retrieving configs and so on are defined in a single bean (.bean(GeneralBean.class)). It works.
But now we want to retry against customer endpoints, and I decided to split this into several Camel steps, because I don't want to retry the whole receiving/remapping/config-retrieval flow; I just want to retry the last step, which is sending.
Now comes the question: which Camel component should I use?
I think Recipient List is good, but I'm not sure how to use it. Maybe "Dynamic Router" is better?
When defining the steps, at the point where I retrieve the config of each customer, one object in the exchange body (let's call it RemappedMessage) becomes two (a RemappedMessage and a list of CustomerConfig). They have a one-to-many relationship. How do I pass these two objects down to the next bean? Or should I process them together in one bean? In the Exchange? In a @ExchangeProperties Map<String, Object> properties? The latter works, but IMO is not very Camel-like. Or define a tuple class to combine them? I use that a lot but think it's ugly.
Also, I don't think there is syntax in Camel to take properties of an object in the Exchange and put them into to() as the URL and as the basic-auth username and password, is there?
In general, I want to divide the process into several steps in a Camel pipeline, but I'm not sure how to deal with the "one object splits into more objects and they need to go hand in hand downstream" problem.
I am not using Spring, but Quarkus.
Now, I am with:
from("activemq:queue:" + appConfig.getQueueName())
    .bean(IncomingMessageConverter.class) // use class form so that Camel will cache the bean
    .bean(UserIdValidator.class) // validate and if wrong, end route here
    .bean(CustomerConfigRetrieverBean.class) // retrieve config of customer, by agent id. How to pass down both??
    .bean(EndpointFieldsTailor.class) // remove fields if this customer is not interested. Needs CustomerConfig
    .recipientList(xxxxxx) // how?
    // what's next?
Because RemappedMessage is the return type of the .bean(IncomingMessageConverter.class) step, Camel can afterwards bind arguments to it, so I have access to the mapped message. But obviously I cannot return 2 objects together.
Recipient List is ideal when you want to send the same message to multiple endpoints and you know what those endpoints are before entering the Recipient List.
The Dynamic Router can route messages to a number of endpoints, the list and order of which are not necessarily known when the router is entered. Since you have a one-to-many situation, Dynamic Router may be a better fit.
A simpler approach that might work is to prepare a List of tuples. Each tuple would contain a CustomerConfig and a RemappedMessage. You would then split the list and in the splitter send the message to the agent. For the tuple you could use something like ImmutablePair, a Map, or just a two-element List.
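A plain-JDK sketch of that tuple idea (the class names mirror the question, but the fields and the pair() helper are illustrative, not part of any Camel API):

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical stand-ins for the poster's CustomerConfig and RemappedMessage
class CustomerConfig {
    final String customerId;
    CustomerConfig(String customerId) { this.customerId = customerId; }
}

class RemappedMessage {
    final String payload;
    RemappedMessage(String payload) { this.payload = payload; }
}

public class TuplePairing {
    // Pair each customer's config with the message remapped for that customer.
    // The resulting list is what you would set as the exchange body before splitting.
    static List<Map.Entry<CustomerConfig, RemappedMessage>> pair(
            List<CustomerConfig> configs, RemappedMessage message) {
        List<Map.Entry<CustomerConfig, RemappedMessage>> pairs = new ArrayList<>();
        for (CustomerConfig cfg : configs) {
            // in the real route you would tailor the message per customer here
            pairs.add(new AbstractMap.SimpleEntry<>(cfg, message));
        }
        return pairs;
    }
}
```

After splitting such a list, each exchange body carries exactly one config together with its message, so downstream beans can bind both from the single tuple.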
As for setting a url and username/password, I'll assume you're using Camel's HTTP component. It's probably best to provide these values as headers since the http component allows this. The URL can be set with the CamelHttpUri header. And the HTTP component will generally pass message headers as HTTP headers, so you can set things like the Authorization header just by setting a message header with the same name.
And there's definitely support for setting headers from values in the exchange and message. E.g.
// assumes the body is a List in which the first element is a CustomerConfig
.setHeader("Authorization", simple("${body[0].authValue}"))
In reality, you'll probably have to do a little more work for authorization. E.g., if it's basic auth, you'll have to compute the value you want to use. I'd set that in a header or property and then refer to it like:
.setHeader("Authorization", simple("${header.basicAuthValue}"))
// or
.setHeader("Authorization", simple("${exchangeProperty.basicAuthValue}"))
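For basic auth specifically, the value can be computed with the plain JDK before being set into that header or property. A minimal sketch (only the Authorization header name and the Basic scheme are standard HTTP; the helper itself is illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuth {
    // Builds the value of an HTTP Basic "Authorization" header from credentials,
    // e.g. to store in a basicAuthValue header or exchange property.
    static String basicAuthValue(String username, String password) {
        String credentials = username + ":" + password;
        return "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
    }
}
```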
In the end the route looks very complicated, but each bean does only one thing.
// main route: splits to the direct:individual route and returns an aggregated list of EventResponse
from("activemq:queue:" + getQueue())
    // note: unmarshalling here is done manually, but it also uses Jsonb
    .bean(EventUnmarshaller.class) // use class form so that Camel will cache the bean
    .setProperty("skipAck", simple("${body.skipAck}"))
    .bean(SomeFieldValidator.class)
    .bean(ConfigDetailsEnricher.class)
    .split(body(), (oldExchange, newExchange) -> { // reduce each new EventResponse into the result list
        if (oldExchange == null) {
            // the first time we aggregate we only have the new exchange,
            // so we just return it
            return newExchange;
        }
        List<EventResponse> list = oldExchange.getIn().getBody(List.class);
        EventResponse newElement = newExchange.getIn().getBody(EventResponse.class);
        list.add(newElement);
        oldExchange.getIn().setBody(list);
        return oldExchange;
    })
        .to("direct:individual")
    .end() // end split
    .to("direct:replyRoute");
// split route: each bean here deals with only one EventSpecificConfig; at the end it is converted to an EventResponse
from("direct:individual")
    .filter().method(CriteriaFilter.class)
    .bean(AnotherFieldsTailor.class)
    .bean(RestOperationEnricher.class)
    .choice()
        .when(simple("${body.restOperation.name} != 'DELETE'"))
            .bean(NonDeleteOperationEventFieldsTailor.class)
    .end()
    .bean(HttpMethodEnricher.class)
    .bean(TokenRetriever.class) // calls the oauth endpoint; a cache is used for the token, so a bean is needed. Sets the token on EventSpecificConfig
    // inject the url into a header here; by the time toD() is reached, the body has already changed to type Event, not EventSpecificConfig
    .bean(EventApiConfiguration.class) // after this step, the body is already the new event (PUT, POST) / null (DELETE)
    .choice()
        .when(simple("${body} != null"))
            .marshal().json(JsonLibrary.Jsonb)
        .endChoice()
    .end()
    .toD("${header.url}", 10) // only cache at most 10 urls
;

from("direct:replyRoute") // list of EventResponse
    .bean(EventResponsesProcessor.class)
    .log(LoggingLevel.DEBUG, "Reply: ${body}")
    .setHeader("from", constant("MYAPP"));
And for the api configuration:
@Handler
void configure(EventSpecificConfig eventSpecificConfig, Exchange exchange) {
    ConfigDetail configDetail = eventSpecificConfig.getConfigDetail();
    String httpMethodUpperCase = eventSpecificConfig.getHttpMethod().getMethodName();
    SubscriptionEvent newEvent = eventSpecificConfig.getNewEvent();
    Message message = exchange.getIn();
    message.setHeader("Authorization", "Bearer " + eventSpecificConfig.getToken());
    message.setHeader(Exchange.HTTP_METHOD, httpMethodUpperCase);
    message.setHeader(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON);
    if (HTTP_METHODS_WITH_PATH.contains(httpMethodUpperCase)) {
        message.setHeader(Exchange.HTTP_PATH, "/" + newEvent.getImsi());
    }
    if (HTTP_METHODS_WITH_BODY.contains(httpMethodUpperCase)) {
        message.setBody(newEvent);
    } else { // DELETE
        message.setBody(null); // DELETE has no body
        // more logic of setting headers for HTTP query params depending on config values
    }
    message.setHeader("url", configDetail.url()); // toD() has no access to EventSpecificConfig, only a header works
}
I found out that choice().when().endChoice().otherwise().endChoice().end() needs a to() in every branch; I cannot do when().setHeader().endChoice(). So in the end I decided to put all this logic into a bean.
Also, you must use end() to end the choice(); you cannot use endChoice() for that. endChoice() is for when() and otherwise(). Quite misleading.
choice()
    .when(simple("${header.foo} == 'bar'"))
        .to("x")
    .endChoice()
    .otherwise()
        .to("y")
    .endChoice()
.end()
I thought endChoice() was on the same level as choice(). Why isn't it named something like endBranch()?

Losing request body on the doCatch after exception Apache Camel

I'm having an issue with a route configuration in Apache Camel.
For some reason, after falling into an exception, I'm losing the ${body}; it simply gets wiped out. Is there any way to keep what I have in ${body} if my request fails?
This is the chunk of code:
.to(pricingDomainUrl + "?bridgeEndpoint=true").process(PRICING_DOMAIN_RESPONSE_PROCESSOR)
.doCatch(Exception.class)
    .doTry()
        .setProperty("exception", simple("true"))
        .log(LoggingLevel.INFO, logger, "BODY ON TRY: ${body}") // <- prints an empty body
        .log(LoggingLevel.WARN, "First call failed, trying cleaning headers")
        .process(PRICING_DOMAIN_REQUEST_PROCESSOR)
    .doCatch(Exception.class)
        .process(EXCEPTION_PROCESSOR);

Apache Camel REST endpoint not returning the final body

I've declared a REST endpoint which calls another route using direct. At the end of the second route I'm logging the body, but it's not the same body returned to the browser.
Here's a small example reproducing the behavior (I'm using Apache Camel with Spring Boot):
@Component
public class EntidadeRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        restConfiguration().bindingMode(json);

        rest("/entidade")
            .get("/{id}")
            .to("direct:search-by-id");

        from("direct:search-by-id")
            .routeId("search-by-id")
            .setProperty("idEntidade", simple("${header.id}"))
            .pollEnrich("file:files/mock/dados?noop=true&idempotent=false")
            .unmarshal().json(JsonLibrary.Jackson)
            .split(jsonpath("$"))
            .filter(jsonpath("$[?(@.id == ${property.idEntidade})]"))
            .marshal().json(JsonLibrary.Jackson)
            .log("${body}");
    }
}
I'm calling it from the browser, using the URL:
http://localhost:8080/camel/entidade/1 .
In the folder files/mock/dados I have a single file, called entidades.json, containing a JSON array (see below).
I know the split and filter are working because I'm logging the body in that last line of code and this is what appears in the log:
2021-04-28 18:15:15.707 INFO 3905542 --- [nio-8080-exec-1] search-by-id : {"id":"1","tipo":"PF","nome":"João Maria"}
But this is what is returned to the browser (the exact content of the entidades.json file):
[{"id":"1","tipo":"PF","nome":"João Maria"},{"id":"2","tipo":"PF","nome":"Maria João"},{"id":"3","tipo":"PF","nome":"João Silva"},{"id":"4","tipo":"PF","nome":"José Souza"}]
Why is the logged body not the same as what shows in the browser, and how do I fix it?
PS: If I remove those marshal and unmarshal calls, I get the following error in the browser:
com.fasterxml.jackson.databind.exc.InvalidDefinitionException: No serializer found for class org.apache.camel.component.file.FileBinding and no properties discovered to create BeanSerializer (to avoid exception, disable SerializationFeature.FAIL_ON_EMPTY_BEANS) (through reference chain: org.apache.camel.component.file.GenericFile["binding"])
Without the input file and without knowing the concrete URI you called (including the given value for the path variable {id}), I can only suspect some issues, as follows:
Did you provide an id at the end of the GET call?
Why did you convert the JSON?
Did you test that the split is correct? Did you aggregate again?
Did you want to log each message?
REST endpoints
You specified the endpoint as GET /entidade/{id}.
The {id} is a path variable.
So let's assume you call GET /entidade/1 with 1 as the ID.
Then your JSON file is polled (read), unmarshalled to ... ?
JSON unmarshal/marshal
The unmarshal and marshal methods are used either to convert between different data formats or to convert between a data representation and Java objects (POJOs) when those are used internally (e.g. to pass to a processor bean etc.).
I suppose the file dados contains textual data as JSON.
So you can simply read this text into a (textual!) message body (like a text message, compare JMS etc.) and work with it: (a) split by JSONPath, (b) filter by JSONPath, (c) log the JSON result, (d) send it back as the HTTP response to the calling client (e.g. your browser).
What happens on the split?
After this you try to split (assuming you have a JSON array):
// incoming:
// a single enriched message, in body: a JSON array with 4 elements
.split(jsonpath("$")) // what do you expect as output?
// here the split usually should be ended using the .end() DSL method
.filter(jsonpath("$[?(@.id == ${property.idEntidade})]")) // each array element of the JSON body matched against the id, e.g. 1
I pasted your HTTP response (the JSON array with the 4 people) into an online JSONPath evaluator. Evaluating $ did not produce a split but a single element (inside the result array): exactly the original array (with the 4 people).
Why wasn't the file split into 4 messages, each containing a person object?
Because your JSONPath $ simply denotes the root element.
Plus: usually the .split() is followed by an .end(), which aggregates the parts again.
You left that out. I suppose that is an issue, too.
Filter works as expected
Later you filtered on the REST-given id:
.filter(jsonpath("$[?(@.id == ${property.idEntidade})]"))
This results in the logged element:
{"id":"1","tipo":"PF","nome":"João Maria"}
The filter worked successfully: it left just the single element with id 1.
Logging in Camel
Log EIP
When adding .log() to the route chain, you are using the Log EIP. Its documentation gives the following warning tip:
Logging message body with streamed messages:
If the message body is stream based, then logging the message body, may cause the message body to be empty afterwards. See this FAQ. For streamed messages you can use Stream caching to allow logging the message body and be able to read the message body afterwards again.
So your empty log message may be caused by this side effect of logging stream-based message bodies.
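The effect is easy to reproduce with a plain InputStream, which is what a stream-based message body wraps (an illustrative plain-Java sketch, not Camel code):

```java
import java.io.IOException;
import java.io.InputStream;

public class ReadOnce {
    // Counts the bytes left in a stream, consuming it in the process.
    // A second call on the same stream finds nothing, which is why logging
    // a streamed body without stream caching leaves it empty afterwards.
    static int drainAndCount(InputStream in) throws IOException {
        int count = 0;
        while (in.read() != -1) {
            count++;
        }
        return count;
    }
}
```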
Difference of log methods
Explained in section Difference between log in the DSL and Log component:
The log DSL is much lighter and meant for logging human logs such as Starting to do …​ etc.
The example below (adjusted to your REST route) illustrates its usage:
rest("/entidade")
    .get("/{id}")
    .log("Processing ${id}")
    .to("bean:foo");
Log component
I would suggest using the standard Log component, by using the .to() DSL with a URI string based on the log: scheme together with the required loggerName parameter.
.to("log:filtered-json")
Here the URI prefix for the Log component is log:. Each message in the stream is logged using the loggerName filtered-json.
The problem was that I needed to pass an AggregationStrategy to the split. I also needed to stop logging the body, because that was consuming the InputStream. After that, I could safely remove the marshal and unmarshal calls.
This is the final code:
@Component
public class EntidadeRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        restConfiguration().bindingMode(json);

        rest("/entidade")
            .get("/{id}")
            .to("direct:search-by-id");

        from("direct:search-by-id")
            .routeId("search-by-id")
            .setProperty("idEntidade", simple("${header.id}"))
            .pollEnrich("file:files/mock/dados?noop=true&idempotent=false")
            .split(jsonpath("$"), takeFirst(Exchange.FILTER_MATCHED))
                .filter(jsonpath("$[?(@.id == ${property.idEntidade})]")).stop()
            .end();
    }

    private AggregationStrategy takeFirst(String matchProperty) {
        return (oldExchange, newExchange) -> {
            if (oldExchange == null && newExchange.getProperty(matchProperty, Boolean.class)) {
                oldExchange = newExchange;
            }
            return oldExchange;
        };
    }
}

How to log request payload when using Camel Rest?

I'd like to log the original 'raw' request body (e.g. JSON) while using Camel Rest endpoints. What's the proper way to do this?
My setup (RouteBuilder) looks like this:
restConfiguration().component("jetty")
    .host(this.host)
    .port(this.port)
    .contextPath(this.contextPath)
    .bindingMode(RestBindingMode.json);

rest("myService/").post()
    .produces("application/json; charset=UTF-8")
    .type(MyServiceRequest.class)
    .outType(MyServiceResponse.class)
    .to(SERVICE_CONTEXT_IN);

from(SERVICE_CONTEXT_IN).process(this.serviceProcessor);
My problem here is that mechanisms such as storing the request as an Exchange property come 'too late' with this approach: any processors are too late in the route, i.e., the binding has already taken place and consumed the request. Also, the CamelHttpServletRequest's InputStream has already been read and contains no data.
The first place to use the log EIP is directly before the single processor:
from(SERVICE_CONTEXT_IN).log(LoggingLevel.INFO, "Request: ${in.body}")
    .process(this.serviceProcessor);
but at that point the ${in.body} is already an instance of MyServiceRequest. The log above simply yields Request: x.y.z.MyServiceRequest@12345678. What I'd like to log is the original JSON prior to being bound to a POJO.
There seems to be no built-in way of enabling logging of the 'raw' request in RestConfigurationDefinition or RestDefinition.
I could get rid of the automatic JSON binding and manually read the HTTP Post request's InputStream, log and perform manual unmarshalling etc. in a dedicated processor but I would like to keep the built-in binding.
I agree there is no way to log the raw request (I assume you mean the payload going over the wire, before any automatic binding) using Camel Rest endpoints.
But taking Roman Vottner's comment into account, you may change your restConfiguration() as follows:
restConfiguration().component("jetty")
    .host(this.host)
    .port(this.port)
    .componentProperty("handlers", "#yourLoggingHandler")
    .contextPath(this.contextPath)
    .bindingMode(RestBindingMode.json);
where your #yourLoggingHandler needs to be registered in your registry and must implement org.eclipse.jetty.server.Handler. Please take a look at writing custom handlers in the Jetty documentation: http://www.eclipse.org/jetty/documentation/current/jetty-handlers.html#writing-custom-handlers.
In the end I 'solved' this by not using the REST DSL binding with a highly sophisticated processor for logging the payload:
restConfiguration().component("jetty")
    .host(this.host)
    .port(this.port)
    .contextPath(this.contextPath);

rest("myService/").post()
    .produces("application/json; charset=UTF-8")
    .to(SERVICE_CONTEXT_IN);

from(SERVICE_CONTEXT_IN).process(this.requestLogProcessor)
    .unmarshal()
    .json(JsonLibrary.Jackson, MyServiceRequest.class)
    .process(this.serviceProcessor)
    .marshal()
    .json(JsonLibrary.Jackson);
All the requestLogProcessor does is read the in body as an InputStream, get and log the String, and pass it on.
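Such a processor essentially drains the InputStream into a String and must set that String back as the body, since the stream itself is only readable once. A plain-JDK sketch of the draining part (the Camel Processor wiring around it is omitted):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class BodyDrainer {
    // Reads an InputStream fully into a UTF-8 String. After logging the result,
    // set the String back as the message body so downstream steps can still read it.
    static String drain(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[4096];
        int n;
        while ((n = in.read(buffer)) != -1) {
            out.write(buffer, 0, n);
        }
        return out.toString(StandardCharsets.UTF_8.name());
    }
}
```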
You can solve this by:
Turning RestBindingMode off on your specific route and logging the incoming request string as-is.
After that you can convert the JSON string to your IN type object using ObjectMapper.
At the end of the route, convert the Java object back to JSON and put it in the exchange out body, since we turned off the RestBindingMode.
rest("myService/").post()
    .bindingMode(RestBindingMode.off)
    .to(SERVICE_CONTEXT_IN);
In my case, streamCaching did the trick, because the stream was readable only once: I was able to log the body, but then could not forward it any more. I hope this might be of help to someone.

Apache Camel - Pass Filename from route1 to route2 which FTPs

I have a requirement of generating a file from webservice and FTP to a location.
Route1:
from("direct:start")
    .routeId("generateFileRoute")
    .setHeader(Exchange.HTTP_METHOD, constant("GET"))
    .setHeader(Exchange.HTTP_URI, simple(URL))
    .setHeader("Authorization", simple(APP_KEY))
    .to(URL)
    .unmarshal(listJacksonDataFormat)
    .marshal(bindyCsvDataFormat)
    .to(fileDirLoc + "?fileName=RMA_OUT_${date:now:MMddyyyy_HHmmss}.csv&noop=true");
Route 2: FTP Route
from("file://" + header("CamelFileNameProduced"))
    .routeId("ftpRoute")
    .to("sftp://FTP_HOST/DIR?username=???&password=???");
To start the route
Exchange exchange = template.request("direct:start", null);
Object filePathObj = exchange.getIn().getHeader("CamelFileNameProduced");
if (filePathObj != null) { // Make sure Route1 has created the file
    camelContext.startRoute("ftpRoute"); // Start FTP route
    template.send(exchange); // Send exchange from Route1 to Route2
}
The above code worked when I hard-coded the location in FTP route.
Can someone please help: how can I pipeline these 2 routes and pass the output of Route 1 (the file name) to Route 2 for FTP?
You cannot pass headers to the file endpoint, it just doesn't work like that. Also, from("file://...") cannot contain dynamic values in its path, i.e. placeholders of any kind, here's a quote from the official Camel documentation:
Camel supports only endpoints configured with a starting directory. So the directoryName must be a directory. If you want to consume a single file only, you can use the fileName option e.g., by setting fileName=thefilename. Also, the starting directory must not contain dynamic expressions with ${} placeholders. Again use the fileName option to specify the dynamic part of the filename.
My suggestion would be to either send to FTP directly, if you are not doing any additional CSV file processing:
from("direct:start")
    .routeId("generateFileRoute")
    .setHeader(Exchange.HTTP_METHOD, constant("GET"))
    .setHeader(Exchange.HTTP_URI, simple(URL))
    .setHeader("Authorization", simple(APP_KEY))
    .to(URL)
    .unmarshal(listJacksonDataFormat)
    .marshal(bindyCsvDataFormat)
    .to("sftp://FTP_HOST/DIR?username=???&password=??&fileName=RMA_OUT_${date:now:MMddyyyy_HHmmss}.csv");
Or to change Route 2 definition from file to direct:
from("direct:ftp-send")
    .routeId("ftpRoute")
    .pollEnrich("file:destination?fileName=${headers.CamelFileNameProduced}")
    .to("sftp://FTP_HOST/DIR?username=???&password=??&fileName=${headers.CamelFileName}")
or to change the definition of Route 2 to pick up only the generated files:
from("file://" + fileDirLoc + "?antInclude=RMA_OUT_*.csv")
    .routeId("ftpRoute")
    .to("sftp://FTP_HOST/DIR?username=???&password=???");
Can't the ftpRoute simply poll fileDirLoc for new files?
There is a workaround, you can try to combine them:
from("direct:start")
    .routeId("generateFileRoute")
    .setHeader(Exchange.HTTP_METHOD, constant("GET"))
    .setHeader(Exchange.HTTP_URI, simple(URL))
    .setHeader("Authorization", simple(APP_KEY))
    .to(URL)
    .unmarshal(listJacksonDataFormat)
    .marshal(bindyCsvDataFormat)
    .to(fileUri.getUri())
    .setHeader(Exchange.FILE_NAME, file.getName())
    .to("sftp://FTP_HOST/DIR?username=???&password=???");
Yes, you cannot have dynamic expressions in the file URI, but you can generate the URI and filename somewhere else, say in a utility method, and refer to them here.
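A minimal version of such a utility, assuming the RMA_OUT_ prefix and the MMddyyyy_HHmmss pattern from the routes above (the class and method names are illustrative):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class FileNames {
    private static final DateTimeFormatter STAMP =
            DateTimeFormatter.ofPattern("MMddyyyy_HHmmss");

    // Builds the same name the ${date:now:MMddyyyy_HHmmss} file expression would,
    // so it can be computed once and reused in both the file and sftp endpoints.
    static String rmaFileName(LocalDateTime now) {
        return "RMA_OUT_" + STAMP.format(now) + ".csv";
    }
}
```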
