I have a Camel route that needs to receive an XML file from FTP as a stream, validate it, and split it.
Everything works fine all the way to the validation, but then the split doesn't work as expected. When debugging, I found that the split doesn't find any processor when the original message is a stream. It looks very much like a bug to me.
from("direct:start")
.pollEnrich("ftp://user#host:21?fileName=file.xml&streamDownload=true&password=xxxx&fastExistsCheck=true&soTimeout=300000&disconnect=true")
.to("validator:myXsd.xsd")
.split().tokenizeXML("myTag")
.to(to)
.end();
In this case I can see the Exchange getting into the splitter, but no processor is found and the split does nothing. The behavior is different if I remove the validation:
from("direct:start")
.pollEnrich("ftp://user#host:21?fileName=file.xml&streamDownload=true&password=xxxx&fastExistsCheck=true&soTimeout=300000&disconnect=true")
.split().tokenizeXML("myTag")
.to(to)
.end();
In this case, the splitter works fine.
Also, if the XML file doesn't come from a stream, then everything is fine.
from("file:file.xml")
.to("validator:myXsd.xsd")
.split().tokenizeXML("myTag")
.to(to)
.end();
I updated my Camel version to 2.15.2 but I still get the same error.
I don't know how the validator works, but if it is changing the message body, try storing it as a header or property, for example: .setHeader("headerName", simple("${body}")) and, after the validator, .setBody(simple("${header.headerName}")).
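For illustration, a minimal sketch of that suggestion applied to the route above (the header name is arbitrary, the FTP options are shortened, and the convertBodyTo(String.class) step is an added assumption so the stashed copy is re-readable rather than the same one-shot stream):
from("direct:start")
    .pollEnrich("ftp://user@host:21?fileName=file.xml&streamDownload=true&password=xxxx")
    .convertBodyTo(String.class)                      // assumed: materialize the stream into a String
    .setHeader("originalBody", simple("${body}"))     // stash the body before validation
    .to("validator:myXsd.xsd")
    .setBody(simple("${header.originalBody}"))        // restore the body after validation
    .split().tokenizeXML("myTag")
        .to(to)
    .end();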
The problem was that I was trying to pass a body that was a stream (streamDownload=true). The validator will read the stream and validate the content; no problem there.
But by the time the split runs, the stream has already been read and closed, so the split can't do anything with it.
I already worked around the problem by not using a stream, but I guess working with stream caching would also work if a stream is necessary.
See http://camel.apache.org/why-is-my-message-body-empty.html
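For completeness, a minimal sketch of the stream-caching variant mentioned above (FTP options shortened; stream caching lets the body be read again after the validator has consumed it):
from("direct:start")
    .streamCaching()                                  // cache the streamed body so it can be read more than once
    .pollEnrich("ftp://user@host:21?fileName=file.xml&streamDownload=true&password=xxxx")
    .to("validator:myXsd.xsd")
    .split().tokenizeXML("myTag")
        .to(to)
    .end();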
I'm trying to create an MUnit test that mocks an HTTP request by setting the payload to a JSON object that I have saved in a file. In Mule 3 I would have just done getResource('fileName.json').asString() and that worked just fine. In Mule 4 though, I can't statically call getResource.
I found a forum post on the Mulesoft forums that suggested I use MunitTools::getResourceAsString. When I run my test, I do see the JSON object but with all the \n and \r characters as well as a \ escaping all of the quotation marks. Obviously this means my JSON is no longer well formed.
Ideally I would like to find a reference for MunitTools so that I can see a list of functions that I can call and maybe find one that does not add the escape characters, but I haven't had any luck. If anybody knows of a reference document that I can refer to, please let me know.
Not being able to find a way to return the data without the extra characters, I tried replacing them via DataWeave. This works fine for \n and \r, but there are also \ characters in front of each double quote and I can't seem to make those go away.
If I do this...
replace (/\/) with ("")
...I get an error. A co-worker suggested targeting each \" and replacing them with ", but that's a problem because that gives me """. To get around this, I've tried
replace(/\"/) with "\""
...which does not cause any errors, but for some reason it reads the \ as a literal so it replaces the original string with itself. I've also tried...
replace(/\"/) with '"'
...but that also results in an error.
I'm open to any other solutions as well.
Thanks
--Drew
I had the same concern, so I started using the readUrl() method. This is a DataWeave function, so you should be able to use it in any MUnit processor. Here is an example of how I used it in the Set Event processor. It reads the JSON file and then converts it into Java for my own needs, but you can just replace java with json for your needs.
<munit:set-event doc:name="Set Event" doc:id="e7b1da19-f746-4964-a7ae-c23aedce5e6f" >
<munit:payload mediaType="application/java" value="#[output application/java --- readUrl('classpath://singleItemRequest.json','application/json')]"/>
</munit:set-event>
Here is the documentation for readUrl: https://docs.mulesoft.com/mule-runtime/4.2/dw-core-functions-readurl
Hope that helps!
Follow this snippet (more specifically the munit-tools:then-return tag):
<munit-tools:mock-when doc:name="Mock GET /users" doc:id="89c8b7fb-1e94-446f-b9a0-ef7840333328" processor="http:request" >
<munit-tools:with-attributes >
<munit-tools:with-attribute attributeName="doc:name" whereValue="GET /users" />
</munit-tools:with-attributes>
<munit-tools:then-return>
<munit-tools:payload value="#[read(MunitTools::getResourceAsString('examples/responses/anypoint-get-users-response.json'), 'application/json')]" />
</munit-tools:then-return>
</munit-tools:mock-when>
It mocks an HTTP request and returns a JSON object using the read() function.
I have the following code
DataFormat bindy = new BindyCsvDataFormat(Employee.class);
from("file:src/main/resources/csv2?noop=true").routeId("route3").unmarshal(bindy).to("mock:result").log("${body[0].name}");
I am trying to log every line of the CSV file; currently I can only hard-code which line gets printed.
Do I have to use a loop even though I don't know the number of lines in the CSV? Or do I have to use a processor? What's the easiest way to achieve what I want?
The unmarshalling step produces an exchange whose body is a list with one item per CSV line. For that reason you can simply use the Camel splitter to slice the original exchange into 1-N sub-exchanges (one per line/item of the list) and then log each of them:
from("file:src/main/resources/csv2?noop=true")
.unmarshal(bindy)
.split().body()
.log("${name}");
If you do not want to alter the original message, you can use the wiretap pattern in order to log a copy of the exchange:
from("file:src/main/resources/csv2?noop=true")
.unmarshal(bindy)
.wireTap("direct:logBody")
.to("mock:result");
from("direct:logBody")
.split().body()
.log("Row# ${exchangeProperty.CamelSplitIndex} : ${name}");
The problem:
I need to process different huge XML files. Each file contains a certain node which I can use to identify the incoming XML message. Based on that node/tag, the message should be sent to a dedicated recipient.
The XML message should not be converted to a String and then checked with contains(), as this would be really inefficient. Rather, XPath should be used to "probe" the message for the occurrence of the expected node.
The solution should be based on Camel's Java DSL. The code:
from("queue:foo")
.choice().xpath("//foo")).to("queue:bar")
.otherwise().to("queue:others");
suggested in Camel's documentation does not compile. I am using Apache Camel 2.19.0.
This compiles:
from("queue:foo")
.choice().when(xpath("//foo"))
.to("queue:bar")
.otherwise()
.to("queue:others");
You need the .when() to test predicate expressions when building a content-based router.
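For reference, a hedged sketch of the same content-based router inside a complete RouteBuilder (the class name is made up here, and the "queue" endpoints are placeholders for whatever component is actually registered under that name):
import org.apache.camel.builder.RouteBuilder;

public class FooContentRouter extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("queue:foo")
            .choice()
                .when(xpath("//foo"))      // predicate: does the message contain a <foo> node?
                    .to("queue:bar")
                .otherwise()
                    .to("queue:others");
    }
}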
I'm new to Camel and I'd like to use it to read an XML file from an FTP server and run an asynchronous process for every NODE element of the XML.
Indeed, I'll use a splitter to process every node (I use streaming because the XML file is big).
from("ftp://user@host:port/...")
.split().tokenizeXML("node").streaming()
.to("seda:processNode")
.end();
Then the route to the nodeProcessor:
from("seda:processNode")
.bean(lookup(MyNodeProcessor.class))
.end();
I was wondering if it's OK to use a splitter without an aggregator? In my case, I don't need to aggregate the outcome of all processed nodes.
I was also wondering if it's a problem in Camel to have many "split" threads going into a "dead end" instead of being aggregated?
The examples provided by Camel show a splitter without an aggregator, but they still provide an aggregationStrategy with the splitter. Is it mandatory?
No, this is perfectly fine. You can use the splitter without an aggregation strategy, which is the normal case, as in the Splitter EIP: http://camel.apache.org/splitter
If you use an aggregation strategy, then it's more like this EIP: http://camel.apache.org/composed-message-processor.html, which can be done with the splitter alone in Camel.
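For illustration, a hedged sketch of both variants (endpoint URIs are placeholders; UseLatestAggregationStrategy is just one of the strategies that ship with Camel):
// Plain splitter: each <node> is sent on and the individual results are simply discarded.
from("ftp://user@host:port/inbox")
    .split().tokenizeXML("node").streaming()
        .to("seda:processNode")
    .end();

// Composed message processor: the same split, but an AggregationStrategy
// combines the replies of the sub-exchanges back into one outgoing message.
// import org.apache.camel.processor.aggregate.UseLatestAggregationStrategy;
from("ftp://user@host:port/inbox")
    .split().tokenizeXML("node").streaming()
        .aggregationStrategy(new UseLatestAggregationStrategy())
        .to("seda:processNode")
    .end();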
I need help. In my current development one of the requirements says:
The server will return 200-OK as a response (HTTP response). If the panelist is verified then, as a result, the server must also return the panelist id of this panelist. The server will place the panelist id inside the body of the 200-OK response in the following way:
<tdcp>
<cmd>
<ack cmd=”Init”>
<panelistid>3849303</panelistid>
</ack>
</cmd>
</tdcp>
Now I am able to set the HTTP response status as
httpServletResponse.setStatus(HttpServletResponse.SC_OK);
And I can put
String responseToClient= "<tdcp><cmd><ack cmd=”Init”><panelistid>3849303</panelistid></ack></cmd></tdcp>";
Now what does putting the above XML inside the body of the 200-OK response mean, and how can it be achieved?
You can write the XML directly to the response as follows.
This example uses ServletResponse.getWriter(), which returns a PrintWriter, to write a String to the response:
String responseToClient= "<tdcp><cmd><ack cmd=”Init”><panelistid>3849303</panelistid></ack></cmd></tdcp>";
httpServletResponse.setStatus(HttpServletResponse.SC_OK);
httpServletResponse.getWriter().write(responseToClient);
httpServletResponse.getWriter().flush();
You simply need to get the output stream (or output writer) of the servlet response, and write to that. See ServletResponse.getOutputStream() and ServletResponse.getWriter() for more details.
(Or simply read any servlet tutorial - without the ability to include data in response bodies, servlets would be pretty useless :)
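A hedged sketch of the OutputStream variant, with the content type and character encoding set explicitly (application/xml is an assumption for this payload; the attribute quotes here are plain straight quotes):
// Requires java.nio.charset.StandardCharsets
String responseToClient = "<tdcp><cmd><ack cmd=\"Init\"><panelistid>3849303</panelistid></ack></cmd></tdcp>";
httpServletResponse.setStatus(HttpServletResponse.SC_OK);
httpServletResponse.setContentType("application/xml");
httpServletResponse.setCharacterEncoding("UTF-8");
httpServletResponse.getOutputStream().write(responseToClient.getBytes(StandardCharsets.UTF_8));
httpServletResponse.getOutputStream().flush();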
If that's meant to be XML, Word has already spoiled things for you by changing the attribute quote symbol to ” instead of ".
It is worth having a look at JAXP if you want to generate XML using Java. Writing strings with < etc. in them won't scale and you'll run into problems with encodings of non-ASCII characters.
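For example, a minimal JAXP/DOM sketch (element names taken from the snippet above; checked exceptions and imports of javax.xml.parsers.*, javax.xml.transform.* and org.w3c.dom.* omitted for brevity):
// Build the <tdcp><cmd><ack cmd="Init"><panelistid>3849303</panelistid></ack></cmd></tdcp> tree
Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();
Element tdcp = doc.createElement("tdcp");
Element cmd = doc.createElement("cmd");
Element ack = doc.createElement("ack");
ack.setAttribute("cmd", "Init");                 // quoting and escaping are handled by the serializer
Element panelistId = doc.createElement("panelistid");
panelistId.setTextContent("3849303");
ack.appendChild(panelistId);
cmd.appendChild(ack);
tdcp.appendChild(cmd);
doc.appendChild(tdcp);

// Serialize the DOM straight into the servlet response
httpServletResponse.setStatus(HttpServletResponse.SC_OK);
httpServletResponse.setContentType("application/xml");
TransformerFactory.newInstance().newTransformer()
        .transform(new DOMSource(doc), new StreamResult(httpServletResponse.getOutputStream()));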