I have a requirement to generate a file from a web service call and FTP it to a location.
Route 1:
from("direct:start")
.routeId("generateFileRoute")
.setHeader(Exchange.HTTP_METHOD, constant("GET"))
.setHeader(Exchange.HTTP_URI, simple(URL))
.setHeader("Authorization", simple(APP_KEY))
.to(URL)
.unmarshal(listJacksonDataFormat)
.marshal(bindyCsvDataFormat)
.to(fileDirLoc + "?fileName=RMA_OUT_${date:now:MMddyyyy_HHmmss}.csv&noop=true");
Route 2: FTP Route
from("file://"+header("CamelFileNameProduced"))
.routeId("ftpRoute")
.to("sftp://FTP_HOST/DIR?username=???&password=???)
To start the route
Exchange exchange = template.request("direct:start", null);
Object filePathObj = exchange.getIn().getHeader("CamelFileNameProduced");
if (filePathObj != null) { // Make sure Route 1 has created the file
camelContext.startRoute("ftpRoute"); // Start FTP route
template.send(exchange); // Send exchange from Route1 to Route2
}
The above code worked when I hard-coded the location in the FTP route.
Can someone please help: how can I pipeline these two routes and pass the output of Route 1 (the file name) to Route 2 for the FTP upload?
You cannot pass headers to the file endpoint; it just doesn't work like that. Also, from("file://...") cannot contain dynamic values in its path, i.e. placeholders of any kind. Here's a quote from the official Camel documentation:
Camel supports only endpoints configured with a starting directory. So the directoryName must be a directory. If you want to consume a single file only, you can use the fileName option e.g., by setting fileName=thefilename. Also, the starting directory must not contain dynamic expressions with ${} placeholders. Again use the fileName option to specify the dynamic part of the filename.
My suggestion would be to either send to FTP directly, if you are not doing any additional CSV file processing:
from("direct:start")
.routeId("generateFileRoute")
.setHeader(Exchange.HTTP_METHOD, constant("GET"))
.setHeader(Exchange.HTTP_URI, simple(URL))
.setHeader("Authorization", simple(APP_KEY))
.to(URL)
.unmarshal(listJacksonDataFormat)
.marshal(bindyCsvDataFormat)
.to("sftp://FTP_HOST/DIR?username=???&password=??&fileName=RMA_OUT_${date:now:MMddyyyy_HHmmss}.csv");
Or to change the Route 2 definition from file to direct:
from("direct:ftp-send")
.routeId("ftpRoute")
.pollEnrich("file:destination?fileName=${headers.CamelFileNameProduced}")
.to("sftp://FTP_HOST/DIR?username=???&password=??&fileName=${headers.CamelFileName}")
Or to change the definition of Route 2 to pick up only the generated files:
from("file://" + fileDirLoc + "?antInclude=RMA_OUT_*.csv")
.routeId("ftpRoute")
.to("sftp://FTP_HOST/DIR?username=???&password=???)
Can't the ftpRoute simply poll fileDirLoc for new files?
There is a workaround: you can try to combine the two routes:
from("direct:start")
.routeId("generateFileRoute")
.setHeader(Exchange.HTTP_METHOD, constant("GET"))
.setHeader(Exchange.HTTP_URI, simple(URL))
.setHeader("Authorization", simple(APP_KEY))
.to(URL)
.unmarshal(listJacksonDataFormat)
.marshal(bindyCsvDataFormat)
.to(fileUri.getUri())
.setHeader(Exchange.FILE_NAME, constant(file.getName()))
.to("sftp://FTP_HOST/DIR?username=???&password=???");
Yes, you cannot have dynamic expressions in the file URI, but you can generate the URI and the file name somewhere else, say in a utility method, and refer to them here.
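For example, a small helper like this (illustrative only) builds the timestamped name once, so both the file endpoint and the SFTP endpoint agree on it:
// Illustrative helper: build the timestamped file name once and reuse it for
// both the file endpoint and the SFTP endpoint.
private static String buildOutputFileName() {
    return "RMA_OUT_" + new java.text.SimpleDateFormat("MMddyyyy_HHmmss").format(new java.util.Date()) + ".csv";
}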
Related
I was using camel-core 2.24.1 and was able to do the following:
from( sources.toArray(new String[0]) )
where sources is a list of URIs that I get from a configuration setting. I am trying to update the code to use Camel 3 (camel-core 3.0.0-RC2), but the method mentioned above was removed and I can't find another way to accomplish the same thing.
Basically I need something like:
from( String uri : sources )
{
// add the uri as from(uri) before continuing with the route
}
In case it helps to understand the situation better, the final route should look like this:
from( sources.toArray(new String[0]) )
.routeId(Constants.ROUTE_ID)
.split().method(WorkRequestSplitter.class, "splitMessage")
.id(Constants.WORK_REQUEST_SPLITTER_ID)
.split().method(RequestSplitter.class, "splitMessage")
.id(Constants.REQUEST_SPLITTER_ID)
.choice()
.when(useReqProc)
.log(LoggingLevel.INFO, "Found the request processor using it")
.to("bean:" + reqName)
.endChoice()
.otherwise()
.log(LoggingLevel.ERROR, "requestProcessor not found, stopping route")
.stop()
.endChoice()
.end()
.log("Sending the request the URI")
.recipientList(header(Constants.HDR_ARES_URI))
.choice()
.when(useResProc)
.log(LoggingLevel.INFO, "Found the results processor using it")
.to("bean:" + resName)
.endChoice()
.otherwise()
.log(LoggingLevel.INFO, "resultProcessor not found, sending 'as is'")
.endChoice()
.end()
.log("Sending the request to all listeners")
.to( this.destinations.toArray( new String[0] ) );
Any help will be greatly appreciated.
This feature was removed with no direct replacement in CAMEL-6589.
See Migration guide:
In Camel 2.x you could have 2 or more inputs to Camel routes, however this was not supported in all use-cases in Camel, and this functionality is seldom in use. This has also been deprecated in Camel 2.x. In Camel 3 we have removed the remaining code for specifying multiple inputs to routes, and its now only possible to specify exactly only 1 input to a route.
You can always split your route definition into logical blocks with the Direct endpoint. This can also be generated dynamically with a for-each loop:
for(String uri : sources){
from(uri).to("direct:commonProcess");
}
from("direct:commonProcess")
.routeId(Constants.ROUTE_ID)
//...
.log("Sending the request to all listeners")
.to(this.destinations.toArray(new String[0]));
I am attempting to upload a CSV file from a local directory to AWS S3 using Apache Camel.
Referencing the documentation found here (https://camel.apache.org/staging/components/latest/aws-s3-component.html), I tried to create a simple route like so (I have of course removed keys and other identifying information and replaced them with [FAKE_INFO]):
from("file:fileName=${in.headers[fileName]}")
.to("aws-s3://[BUCKET]?accessKey=[ACCESS_KEY]&secretKey=RAW([SECRET_KEY])®ion=US_EAST_2&prefix=TEST.csv");
This results in the following error:
java.lang.IllegalArgumentException: AWS S3 Key header missing
After searching a bit online, I removed the prefix parameter and instead inserted a .setHeader into the route, like so:
from("file:fileName=${in.headers[fileName]}")
.setHeader(S3Constants.KEY, simple("TEST.csv"))
.to("aws-s3://[BUCKET]?accessKey=[ACCESS_KEY]&secretKey=RAW([SECRET_KEY])®ion=US_EAST_2");
This works fine, as long as I am willing to hard-code everything after the setHeader. However, for my particular use case I need to pass items from the exchange headers to feed the keys, bucket name, and fileName (this route is used by multiple files that go to different buckets based on criteria received in the exchange headers). For some reason, as soon as I use setHeader to set the S3Constants.KEY, I am no longer able to access any of the exchange headers - in fact, I can't even assign the S3Constants.KEY value from an exchange header. As you can see, the fileName in the from section is assigned via an exchange header and I don't run into any issues there, so I know the headers are reaching the route.
Any thoughts on how I can modify this route so that it will allow me to upload files without the S3Constants and using exchange headers where appropriate?
I'm not sure if I understand you correctly, but it sounds to me that:
The problem in the question's subject is already solved
Your only remaining problem is the static destination address, which you want to be dynamic
To define dynamic destination addresses, there is a "dynamic to":
.toD(...)
You can use, for example, Simple expressions in such a dynamic destination address:
.toD("aws-s3://${in.header.bucket}?region=${in.header.region}&...")
See the Camel Docs (section "Dynamic To") for more details.
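Applied to the S3 route above, that could look roughly like this (a sketch; bucket, region and targetKey are assumed to be message headers set by earlier processing):
// Sketch only: bucket, region and targetKey are assumed headers set upstream.
from("file:inbox")
    .setHeader(S3Constants.KEY, simple("${in.header.targetKey}"))
    .toD("aws-s3://${in.header.bucket}?accessKey=[ACCESS_KEY]&secretKey=RAW([SECRET_KEY])&region=${in.header.region}");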
By the way: you write about "exchange headers". Don't confuse Exchange properties with Message headers!
Exchange properties are on the Exchange wrapper only and therefore lost with the Exchange after the Camel route has finished processing.
Message headers are on the message itself and therefore they are kept on the message even after routing it to a queue or whatever endpoint. This also implies that headers must be serializable.
You must access these two types differently. For example, in Simple you get a header from the inbound message with ${in.header.myHeader}, while you get an Exchange property with ${exchangeProperty.myProperty}.
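A tiny illustration of the difference (the endpoint names are placeholders):
// The header travels with the message to the queue; the Exchange property does not.
from("direct:demo")
    .setHeader("myHeader", constant("kept on the message"))
    .setProperty("myProperty", constant("lives only on this exchange"))
    .log("header=${in.header.myHeader}, property=${exchangeProperty.myProperty}")
    .to("jms:queue:out"); // only myHeader is carried along with the message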
I'd like to log the original 'raw' request body (e.g. JSON) while using Camel Rest endpoints. What's the proper way to do this?
My setup (RouteBuilder) looks like this:
restConfiguration().component("jetty")
.host(this.host)
.port(this.port)
.contextPath(this.contextPath)
.bindingMode(RestBindingMode.json);
rest("myService/").post()
.produces("application/json; charset=UTF-8")
.type(MyServiceRequest.class)
.outType(MyServiceResponse.class)
.to(SERVICE_CONTEXT_IN);
from(SERVICE_CONTEXT_IN).process(this.serviceProcessor);
My problem here is that mechanics such as storing the request as an Exchange property come 'too late' for this approach: any processor is already too late in the route, i.e. the binding has already taken place and consumed the request. Also, the CamelHttpServletRequest's InputStream has already been read and contains no data.
The earliest point to use the log EIP is directly before the single processor:
from(SERVICE_CONTEXT_IN).log(LoggingLevel.INFO, "Request: ${in.body}")
.process(this.serviceProcessor);
but at that point the ${in.body} is already an instance of MyServiceRequest. The added log above simply yields Request: x.y.z.MyServiceRequest#12345678. What I'd like to log is the original JSON prior to being bound to a POJO.
There seems to be no built-in way of enabling logging of the 'raw' request in RestConfigurationDefinition nor RestDefinition.
I could get rid of the automatic JSON binding, read the HTTP POST request's InputStream manually, and do the logging and unmarshalling in a dedicated processor, but I would like to keep the built-in binding.
I agree there is no way to log the raw request (I assume you mean the payload going through the wire before any automatic binding) using Camel Rest endpoints.
But taking Roman Vottner's suggestion into account, you may change your restConfiguration() as follows:
restConfiguration().component("jetty")
.host(this.host)
.port(this.port)
.componentProperty("handlers", "#yourLoggingHandler")
.contextPath(this.contextPath)
.bindingMode(RestBindingMode.json);
where #yourLoggingHandler needs to be registered in your registry and must implement org.eclipse.jetty.server.Handler. Please take a look at the section on writing custom handlers in the Jetty documentation: http://www.eclipse.org/jetty/documentation/current/jetty-handlers.html#writing-custom-handlers.
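For reference, such a handler could look roughly like this (a sketch; note that reading the request body inside the handler would consume the stream before Camel's binding sees it, so logging the payload there would additionally require wrapping the request):
// Sketch of a handler that could be registered as #yourLoggingHandler.
// It only logs the request line; it deliberately does not read the body.
public class LoggingHandler extends org.eclipse.jetty.server.handler.AbstractHandler {
    @Override
    public void handle(String target, org.eclipse.jetty.server.Request baseRequest,
                       javax.servlet.http.HttpServletRequest request,
                       javax.servlet.http.HttpServletResponse response) {
        org.slf4j.LoggerFactory.getLogger(LoggingHandler.class)
            .info("{} {}", request.getMethod(), request.getRequestURI());
        // setHandled(true) is intentionally not called, so the request continues to the Camel servlet
    }
}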
In the end I 'solved' this by not using the REST DSL binding and logging the payload with a dedicated processor instead:
restConfiguration().component("jetty")
.host(this.host)
.port(this.port)
.contextPath(this.contextPath);
rest("myService/").post()
.produces("application/json; charset=UTF-8")
.to(SERVICE_CONTEXT_IN);
from(SERVICE_CONTEXT_IN).process(this.requestLogProcessor)
.unmarshal()
.json(JsonLibrary.Jackson, MyServiceRequest.class)
.process(this.serviceProcessor)
.marshal()
.json(JsonLibrary.Jackson);
All the requestLogProcessor does is read the in body as an InputStream, convert it to a String, log it, and pass it back on.
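A minimal sketch of such a processor (the names are illustrative):
// Reads the raw payload as a String, logs it, and sets it back so the
// subsequent unmarshal step still has a body to work with.
public class RequestLogProcessor implements org.apache.camel.Processor {
    private static final org.slf4j.Logger LOG =
        org.slf4j.LoggerFactory.getLogger(RequestLogProcessor.class);

    @Override
    public void process(org.apache.camel.Exchange exchange) throws Exception {
        String rawJson = exchange.getIn().getBody(String.class);
        LOG.info("Raw request: {}", rawJson);
        exchange.getIn().setBody(rawJson);
    }
}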
You can solve this by:
Turning the RestBindingMode to off on your specific route and logging the incoming request string as is.
Then converting the JSON string to your IN type object using ObjectMapper.
At the end of the route, converting the Java object back to JSON and putting it in the exchange's out body, since we turned off the RestBindingMode.
rest("myService/").post()
.bindingMode(RestBindingMode.off)
.to(SERVICE_CONTEXT_IN);
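A sketch of the consumer route under this approach (the lambdas and the direct use of Jackson's ObjectMapper are illustrative):
// bindingMode is off, so the body arrives as the raw JSON string.
from(SERVICE_CONTEXT_IN)
    .log(LoggingLevel.INFO, "Raw request: ${body}")
    .process(e -> e.getIn().setBody(
        new ObjectMapper().readValue(e.getIn().getBody(String.class), MyServiceRequest.class)))
    .process(this.serviceProcessor)
    // convert the result back to JSON ourselves, since the binding will not do it for us
    .process(e -> e.getIn().setBody(
        new ObjectMapper().writeValueAsString(e.getIn().getBody())));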
In my case, streamCaching did the trick, because the stream was readable only once. Without it I was able to log the body but could no longer forward it. I hope this helps someone.
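For example (a sketch, reusing the endpoint name from above), enabling it on the route so the body can be logged and still forwarded:
// With stream caching enabled, the cached body can be read more than once.
from(SERVICE_CONTEXT_IN)
    .streamCaching()
    .log(LoggingLevel.INFO, "Raw request: ${body}")
    .process(this.serviceProcessor);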
I am trying to forward a large file, pulled as an input stream, to another service using Spring's RestTemplate. I have followed the answer given by @artbristol in this topic: How to forward large files with RestTemplate?
It looks like the body of the request is being set properly (I inspected the request with Charles Proxy). The problem is that I have not set the headers correctly, since I believe I need to set the content type to multipart/form-data, which I tried by adding this in the callback:
request.getHeaders().setContentType(
new MediaType("multipart", "form-data"));
But the HTTP headers are still missing the boundary; I am not sure how to set that, and I am probably missing some other settings as well.
So I was able to figure this out. Basically, I needed to create a Spring message converter that takes in the input stream and writes it out to the body, and I also have to use the Form Message Converter to write out the request body as well.
So on the RestTemplate I add a new input stream message converter to its message converters. In the callback I create a MultiValueMap that maps a String to the InputStream, wrapped in an HttpEntity. Then I create a new instance of the FormHttpMessageConverter and call write, passing in the request and the MultiValueMap.
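A sketch of what that can look like (the URL, the part name and the inputStream variable are illustrative):
// The RequestCallback streams the multipart body straight into the outgoing request;
// FormHttpMessageConverter takes care of writing the multipart boundary.
RestTemplate restTemplate = new RestTemplate();
restTemplate.execute("http://target-service/upload", HttpMethod.POST,
    request -> {
        MultiValueMap<String, Object> parts = new LinkedMultiValueMap<>();
        HttpHeaders partHeaders = new HttpHeaders();
        partHeaders.setContentType(MediaType.APPLICATION_OCTET_STREAM);
        parts.add("file", new HttpEntity<>(new InputStreamResource(inputStream), partHeaders));
        new FormHttpMessageConverter().write(parts, MediaType.MULTIPART_FORM_DATA, request);
    },
    response -> null);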
It looks like the issue is that I did not include the path to htrace-core.jar in the Spark classpath:
spark-shell --driver-class-path /opt/cloudera/parcels/CDH/lib/hbase/hbase-server.jar:/opt/cloudera/parcels/CDH/lib/hbase/hbase-protocol.jar:/opt/cloudera/parcels/CDH/lib/hbase/hbase-hadoop2-compat.jar:/opt/cloudera/parcels/CDH/lib/hbase/hbase-client.jar:/opt/cloudera/parcels/CDH/lib/hbase/hbase-common.jar:/opt/cloudera/parcels/CDH/lib/hbase/lib/htrace-core.jar:/etc/hbase/conf
This seems to be new for Spark 1.x.
I want to build a simple Camel application that will get XML from a URL and then send it to another URL.
I tried this:
from("jetty:http://.../sitemap.xml?delay=5000")
.process(new Processor() {
.....
})
.to("http://...");
and I found a couple of problems:
1) I can't get the content from the URL automatically - the route is only triggered when I open sitemap.xml in a web browser, but I want my application to connect on its own every 5 seconds and fetch the content
2) when I try to connect to localhost I get a socket problem - java.net.SocketException: Permission denied
Maybe you have a simple example of how to do what I need?
camel-jetty is for exposing HTTP endpoints; you need to use camel-http4 to consume from remote HTTP sites.
Also, use camel-timer for periodic operations like this:
from("timer://foo?fixedRate=true&delay=0&period=5000")
.to("http4://.../sitemap.xml")
...;
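Put together, a complete "poll every 5 seconds and forward" route could look like this (the URLs are placeholders):
// Fires every 5 seconds, fetches the sitemap over HTTP and forwards the body to the target URL.
from("timer://sitemapPoll?fixedRate=true&delay=0&period=5000")
    .to("http4://example.com/sitemap.xml")       // GET, since the timer exchange has no body
    .convertBodyTo(String.class)
    .removeHeaders("CamelHttp*")                 // don't leak HTTP headers from the first call into the second
    .to("http4://target.example.com/receive");   // POST, since the exchange now has a body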