I tried googling this and found nothing.
I know that you can use expression language like ${file:name} to log the file name, but how do you log the contents of the JSON message being passed forward?
You can log the body and headers of the exchange by writing
.log(LoggingLevel.INFO, "test", "${body} and ${headers}")
It depends on how you log, but you can use a TraceFormatter with showBody set to true; see http://camel.apache.org/tracer.html
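A minimal sketch of enabling that tracer programmatically on Camel 2.x (Tracer and DefaultTraceFormatter live in org.apache.camel.processor.interceptor; the wrapper class here is just illustrative):

import org.apache.camel.CamelContext;
import org.apache.camel.processor.interceptor.DefaultTraceFormatter;
import org.apache.camel.processor.interceptor.Tracer;

public final class TracerSetup {

    // Registers a Camel 2.x Tracer that prints the message body and headers for every traced step.
    public static void enableBodyTracing(CamelContext context) {
        Tracer tracer = new Tracer();
        DefaultTraceFormatter formatter = new DefaultTraceFormatter();
        formatter.setShowBody(true);     // include the body in the trace output
        formatter.setShowHeaders(true);  // include the headers as well
        tracer.setFormatter(formatter);
        context.addInterceptStrategy(tracer);
    }
}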
I am using the EWS Java API to read and process emails. One such email contains a few conversations and an MS Teams meeting invitation at the end. While reading such an email, EmailMessage.getBody() returns only the MS Teams meeting information and all the other contents of the email body are omitted. Sample code below:
EmailMessage message = EmailMessage.bind(service, new ItemId(item.get(nMessagePos).getId().getUniqueId()));
String emailBody = message.getBody().toString();
I tried setting the BodyType property to both HTML and Text and then fetched the body of the email but it still returns only the Meeting invite details.
Is there any specific reason for this and is there a way for me to get the complete email body?
I would try enabling tracing (https://learn.microsoft.com/en-us/exchange/client-developer/exchange-web-services/how-to-trace-requests-responses-to-troubleshoot-ews-managed-api-applications) or looking at the actual SOAP responses you're getting; it could be a parsing issue on the client side (e.g. a bug in the library). You could also try getting the MimeContent of the message instead and then parsing the body back out of that content. Something like EWSEditor (https://github.com/dseph/EwsEditor/releases) might be useful for diagnosing what is going on: it will show you what the responses look like and let you test MimeContent etc. without needing to write any code.
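A rough sketch of both suggestions (class and package names follow ews-java-api 2.0 and may differ slightly in your version; the wrapper class is just illustrative):

import microsoft.exchange.webservices.data.core.ExchangeService;
import microsoft.exchange.webservices.data.core.PropertySet;
import microsoft.exchange.webservices.data.core.service.item.EmailMessage;
import microsoft.exchange.webservices.data.core.service.schema.ItemSchema;
import microsoft.exchange.webservices.data.property.complex.ItemId;

public final class EwsBodyDiagnostics {

    // Dump the raw SOAP requests/responses, so you can see whether the body is already
    // truncated on the wire or only after client-side parsing.
    public static void enableTracing(ExchangeService service) {
        service.setTraceEnabled(true);
    }

    // Fetch the full MIME content instead of the parsed body; the original message text
    // can then be recovered from it (e.g. with JavaMail).
    public static byte[] loadRawMime(ExchangeService service, ItemId id) throws Exception {
        EmailMessage message = EmailMessage.bind(service, id, new PropertySet(ItemSchema.MimeContent));
        return message.getMimeContent().getContent();
    }
}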
I've declared a REST endpoint, which calls another route using direct. At the end of the second route I'm logging the body, but it's not the same body that is returned to the browser.
Here's a small example reproducing the behavior (I'm using Apache Camel with Spring Boot):
@Component
public class EntidadeRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        restConfiguration().bindingMode(RestBindingMode.json);

        rest("/entidade")
            .get("/{id}")
            .to("direct:search-by-id");

        from("direct:search-by-id")
            .routeId("search-by-id")
            .setProperty("idEntidade", simple("${header.id}"))
            .pollEnrich("file:files/mock/dados?noop=true&idempotent=false")
            .unmarshal().json(JsonLibrary.Jackson)
            .split(jsonpath("$"))
            .filter(jsonpath("$[?(@.id == ${property.idEntidade})]"))
            .marshal().json(JsonLibrary.Jackson)
            .log("${body}");
    }
}
I'm calling it from the browser, using the URL:
http://localhost:8080/camel/entidade/1
In the folder files/mock/dados I have a single file, called entidades.json, which contains a JSON array (see below).
I know the split and filter are working because I'm logging the body in that last line of code and this is what appears in the log:
2021-04-28 18:15:15.707 INFO 3905542 --- [nio-8080-exec-1] search-by-id : {"id":"1","tipo":"PF","nome":"João Maria"}
But this is what is returned to the browser (the exact content of the entidades.json file):
[{"id":"1","tipo":"PF","nome":"João Maria"},{"id":"2","tipo":"PF","nome":"Maria João"},{"id":"3","tipo":"PF","nome":"João Silva"},{"id":"4","tipo":"PF","nome":"José Souza"}]
Why is the logged body not the same as what appears in the browser, and how do I fix it?
PS: If I remove those marshal and unmarshal calls, I get the following error in the browser:
com.fasterxml.jackson.databind.exc.InvalidDefinitionException: No serializer found for class org.apache.camel.component.file.FileBinding and no properties discovered to create BeanSerializer (to avoid exception, disable SerializationFeature.FAIL_ON_EMPTY_BEANS) (through reference chain: org.apache.camel.component.file.GenericFile["binding"])
Without the input file and the concrete URI you called (including the value given for the path variable {id}), I can only suspect some issues, as follows:
Did you provide an id at the end of the GET call?
Why did you convert the JSON?
Did you test that the split is correct? Did you aggregate again?
Did you want to log each message?
REST endpoints
You specified the endpoint as GET /entidade/{id}.
The {id} is a path variable.
So let's assume you call GET /entidade/1 with 1 as the ID.
Then your JSON file is polled (read), unmarshalled to ... ?
JSON unmarshal/marshal
The unmarshal and marshal methods are used either to convert between different data formats or to convert between a data representation and Java objects (POJOs) that are used internally (e.g. passed to a processor bean, etc.).
I suppose the file in dados contains textual JSON data.
So you can simply read this text into a (textual!) message body (like a text message, compare JMS etc.) and work with it: (a) split by JSON-path, (b) filter by JSON-path, (c) log this JSON result, (d) send it back as the HTTP response to the calling client (e.g. your browser).
What happens on the split?
After this you try to split (assuming you have a JSON-array):
// incoming:
// a single enriched message, in body: JSON-array with 4 elements
.split(jsonpath("$")) // what do you expect as output ?
// here the split usually should be ended using `.end` DSL method
.filter(jsonpath("$[?(#.id == ${property.idEntidade})]")) // each array element of JSON body matched against id, e.g. 1
I fed your HTTP response (the JSON array with the 4 people) into an online JSON-path evaluator. Evaluating $ did not produce a split, but a single element (inside the result array): exactly the original array (with 4 people).
Why wasn't the file split into 4 messages, each containing a person object?
Because your JSON-path $ simply denotes the root element.
Plus: usually the .split() is followed by an .end(), which aggregates the parts again.
You left that out; I suppose that is an issue, too.
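As an aside, a minimal sketch of that usual split ... end shape (hypothetical endpoints, just to illustrate where end() goes):

from("direct:example")                // hypothetical endpoint
    .split(body())                    // e.g. split a java.util.List body into one message per element
        .log("split part: ${body}")   // everything between split() and end() runs once per part
    .end()                            // closes the split block; the route then continues as before
    .log("after the split: ${body}");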
Filter works as expected
Later you filtered on the REST-given id:
.filter(jsonpath("$[?(#.id == ${property.idEntidade})]"))`
This results in the logged element:
{"id":"1","tipo":"PF","nome":"João Maria"}
The filter worked successfully, leaving just the single element with id 1.
Logging in Camel
Log EIP
When you add .log() to the route chain, you are using the Log EIP. Its documentation gives the following warning:
Logging message body with streamed messages:
If the message body is stream based, then logging the message body, may cause the message body to be empty afterwards. See this FAQ. For streamed messages you can use Stream caching to allow logging the message body and be able to read the message body afterwards again.
So your empty log message may be caused by a side effect of using this to log stream-based message bodies.
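If that is what is happening, a hedged sketch of turning on stream caching for the route (hypothetical endpoint names) would be:

from("direct:example")
    .streamCaching()        // cache the streamed body so it can be read again after logging
    .log("${body}")
    .to("mock:result");
// alternatively, enable it for the whole context in configure(): getContext().setStreamCaching(true);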
Difference of log methods
Explained in section Difference between log in the DSL and Log component:
The log DSL is much lighter and meant for logging human logs such as Starting to do … etc.
The example below (adjusted to your REST route) illustrates its usage:
rest("/entidade")
.get("/{id}")
.log("Processing ${id}")
.to("bean:foo");
Log component
I would suggest using the standard Log component: simply use the .to() DSL with a URI based on the log: scheme together with the required loggerName parameter.
.to("log:filtered-json")
Here the URI prefix for the Log component is log:, and each message in the stream is logged using the loggerName filtered-json.
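The Log component also accepts options on the URI; for example, to include the body and headers in a multi-line output:

// log at INFO under the "filtered-json" logger, including body and headers, spread over multiple lines
.to("log:filtered-json?level=INFO&showBody=true&showHeaders=true&multiline=true")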
The fix was to pass an AggregationStrategy to the split. I also needed to stop logging the body, because logging it was consuming the InputStream. After that, I could safely remove those marshal and unmarshal calls.
This is the final code:
@Component
public class EntidadeRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        restConfiguration().bindingMode(RestBindingMode.json);

        rest("/entidade")
            .get("/{id}")
            .to("direct:search-by-id");

        from("direct:search-by-id")
            .routeId("search-by-id")
            .setProperty("idEntidade", simple("${header.id}"))
            .pollEnrich("file:files/mock/dados?noop=true&idempotent=false")
            .split(jsonpath("$"), takeFirst(Exchange.FILTER_MATCHED))
                .filter(jsonpath("$[?(@.id == ${property.idEntidade})]")).stop()
            .end();
    }

    private AggregationStrategy takeFirst(String matchProperty) {
        return (oldExchange, newExchange) -> {
            // keep the first exchange whose filter matched (the property may be absent, hence TRUE.equals)
            if (oldExchange == null
                    && Boolean.TRUE.equals(newExchange.getProperty(matchProperty, Boolean.class))) {
                oldExchange = newExchange;
            }
            return oldExchange;
        };
    }
}
I'd like to log the original 'raw' request body (e.g. JSON) while using Camel Rest endpoints. What's the proper way to do this?
My setup (RouteBuilder) looks like this:
restConfiguration().component("jetty")
.host(this.host)
.port(this.port)
.contextPath(this.contextPath)
.bindingMode(RestBindingMode.json);
rest("myService/").post()
.produces("application/json; charset=UTF-8")
.type(MyServiceRequest.class)
.outType(MyServiceResponse.class)
.to(SERVICE_CONTEXT_IN);
from(SERVICE_CONTEXT_IN).process(this.serviceProcessor);
My problem here is that mechanics such as storing the request as an Exchange property come 'too late' with this approach: any processors are too late in the route, i.e. the binding has already taken place and consumed the request. Also, the CamelHttpServletRequest's InputStream has already been read and contains no data.
The first place to use the log EIP is directly before the single processor:
from(SERVICE_CONTEXT_IN).log(LoggingLevel.INFO, "Request: ${in.body}")
.process(this.serviceProcessor);
but at that point the ${in.body} is already an instance of MyServiceRequest. The added log above simply yields Request: x.y.z.MyServiceRequest@12345678. What I'd like to log is the original JSON prior to its being bound to a POJO.
There seems to be no built-in way of enabling logging of the 'raw' request in RestConfigurationDefinition nor RestDefinition.
I could get rid of the automatic JSON binding and manually read the HTTP Post request's InputStream, log and perform manual unmarshalling etc. in a dedicated processor but I would like to keep the built-in binding.
I agree there is no way to log the raw request (I assume you mean the payload going through the wire before any automatic binding) using Camel Rest endpoints.
But taking Roman Vottner's suggestion into account, you may change your restConfiguration() as follows:
restConfiguration().component("jetty")
.host(this.host)
.port(this.port)
.componentProperty("handlers", "#yourLoggingHandler")
.contextPath(this.contextPath)
.bindingMode(RestBindingMode.json);
where #yourLoggingHandler needs to be registered in your registry and must implement org.eclipse.jetty.server.Handler. Please take a look at writing custom handlers in the Jetty documentation: http://www.eclipse.org/jetty/documentation/current/jetty-handlers.html#writing-custom-handlers.
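For illustration, a rough sketch of such a handler (Jetty 9 style, hypothetical class name; note that reading the body here would consume the InputStream, so logging the payload itself would additionally require wrapping the request):

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.handler.AbstractHandler;

public class RequestLoggingHandler extends AbstractHandler {

    @Override
    public void handle(String target, Request baseRequest,
                       HttpServletRequest request, HttpServletResponse response)
            throws IOException, ServletException {
        // log the request line; do not call baseRequest.setHandled(true),
        // so camel-jetty still processes the exchange afterwards
        System.out.println(request.getMethod() + " " + request.getRequestURI());
    }
}

The instance then has to be bound in the registry under the name yourLoggingHandler (e.g. as a Spring bean with that name).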
In the end I 'solved' this by not using the REST DSL binding and instead adding a highly sophisticated processor for logging the payload:
restConfiguration().component("jetty")
.host(this.host)
.port(this.port)
.contextPath(this.contextPath);
rest("myService/").post()
.produces("application/json; charset=UTF-8")
.to(SERVICE_CONTEXT_IN);
from(SERVICE_CONTEXT_IN).process(this.requestLogProcessor)
.unmarshal()
.json(JsonLibrary.Jackson, MyServiceRequest.class)
.process(this.serviceProcessor)
.marshal()
.json(JsonLibrary.Jackson);
All the requestLogProcessor does is read the in-body as an InputStream, convert it to a String, log it, and pass it on.
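A minimal sketch of what such a processor could look like (assuming the setup above; the key point is putting the String back into the body so the following unmarshal step still has something to read):

import java.io.InputStream;

import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class RequestLogProcessor implements Processor {

    private static final Logger LOG = LoggerFactory.getLogger(RequestLogProcessor.class);

    @Override
    public void process(Exchange exchange) throws Exception {
        // read the streamed request body once and turn it into a String
        InputStream in = exchange.getIn().getBody(InputStream.class);
        String payload = exchange.getContext().getTypeConverter()
                .convertTo(String.class, exchange, in);
        LOG.info("Raw request: {}", payload);
        // put the String back so the following unmarshal step still has a body to read
        exchange.getIn().setBody(payload);
    }
}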
You can solve this by:
Turning the RestBindingMode to off on your specific route and logging the incoming request string as is.
Then you can convert the JSON string to your IN-type object using ObjectMapper.
At the end of the route, convert the Java object back to JSON and put it in the exchange out body, since we turned off the RestBindingMode.
rest("myService/").post()
.bindingMode(RestBindingMode.off)
.to(SERVICE_CONTEXT_IN);
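Putting those three steps together, a hedged sketch of the rest of the route inside the same RouteBuilder could look like this (MyServiceRequest, serviceProcessor and SERVICE_CONTEXT_IN are taken from the question; the ObjectMapper processor is just one way to do the manual binding, and you would normally reuse a single mapper instance):

from(SERVICE_CONTEXT_IN)
    // binding mode is off, so the body is still the raw JSON String here
    .log(LoggingLevel.INFO, "Raw request: ${body}")
    .process(exchange -> {
        ObjectMapper mapper = new ObjectMapper();
        MyServiceRequest request = mapper.readValue(
                exchange.getIn().getBody(String.class), MyServiceRequest.class);
        exchange.getIn().setBody(request);
    })
    .process(this.serviceProcessor)
    // convert the response POJO back to JSON ourselves, since binding is off
    .marshal().json(JsonLibrary.Jackson);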
In my case, streamCaching did the trick, because the stream was readable only once: I was able to log the body but could not forward it any more. I hope this might be of help to someone.
I am working on a use case where I am displaying user's messages on a JSP. Details of the flow are:
All the messages will be shown in a table with icon for attachments
When the user clicks on attachment, the file should get downloaded.
If there is more than one attachment, the user can select the required one to download.
The attachments will be stored on the local filesystem and the path for the attachments will be determined by the system.
I have tried to implement by referring to these SO questions:
Input and Output binary streams using JERSEY?
Return a file using Java Jersey
file downloading in restful web services
However, it's not solving my purpose. I have the following questions:
Is it possible to send message data (like subject, message, message id, etc.) along with the attachments (InputStream) in one response?
If yes, what does the MediaType for the @Produces annotation on my resource method need to be? Currently my resource is annotated with @Produces(MediaType.APPLICATION_JSON). Will this work?
How do I send the file data in the response?
Any pointers appreciated. TIA.
You can add custom data to the response headers, so yes, you are able to send such message data: add it to the response headers.
@Produces(MediaType.APPLICATION_JSON) will not work, unless the clients accept JSON as a file, which they should not and will not do ;)
The correct MediaType depends on what kind of file you want to submit.
You can use the default MediaType / MIME type MediaType.APPLICATION_OCTET_STREAM / application/octet-stream (see "Is there a 'default' MIME type?"), but I think it's better to use the correct and exact MIME type for your file.
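For illustration, a minimal JAX-RS sketch along those lines (the resource path, file location and custom header name are hypothetical):

import java.io.File;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/messages")
public class AttachmentResource {

    @GET
    @Path("/{messageId}/attachment")
    @Produces(MediaType.APPLICATION_OCTET_STREAM)
    public Response downloadAttachment(@PathParam("messageId") String messageId) {
        // hypothetical lookup of the attachment on the local filesystem
        File file = new File("/data/attachments/" + messageId + ".pdf");
        return Response.ok(file, MediaType.APPLICATION_OCTET_STREAM)
                // message metadata can travel in custom response headers
                .header("X-Message-Id", messageId)
                .header("Content-Disposition",
                        "attachment; filename=\"" + file.getName() + "\"")
                .build();
    }
}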
You will find working examples for sending file data with Jersey in "Input and Output binary streams using JERSEY?", so there is no need to answer this again :)
Hope this was helpful somehow, have a nice day.
I have a simple Java RESTful service which queries a database and sends the response back based on the request parameter. I need to generate some simple reports based on the Apache access_log, for example the number of queries per day, the number of similar queries, etc.
One of the reports I need to generate is a list of queries which return zero results. I'm wondering how to achieve this. I can't rely on the response size in the Apache log, since a response XML with zero results will still be returned.
I'm thinking of setting a custom cookie if the query returns no results and having it printed in the Apache log:
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" \"%{Cookie}i\"" combined-cookie
I'm not sure if this will work or, to be honest, whether this is the right approach.
Any pointers will be highly appreciated.
Thanks
If you know for sure that the "no results" response is NNN bytes, and you know that any other response would be different (larger), then you could potentially query your access log for responses of size NNN. But that's a bit of a hack, and it's brittle if the size of an empty response changes for whatever reason.
I don't think Apache has any built-in capability to inspect the content of a response and set variables based on some property of the data. (You could potentially do something very hacky with mod_ext_filter, but it's not worth the hassle and the performance would likely suffer.)
It sounds like you already have the ability to change the server code that's producing the response. Since that's the case, I would not try to use Apache logging. Instead, I would add some additional logging capability to your server. Every response could output a line to a different log file. The lines could look like this:
2012-06-14 14:02:15.345 count=0 status=Completed
2012-06-14 14:02:15.906 count=12 status=Completed
...
Then the type of reporting you need becomes easier.
But if you absolutely have to do it with Apache, then my suggestion would be to invent a new HTTP header, something like X-Query-Result, and then tweak your server to set that header on every response. For example:
X-Query-Result: count=0 status=Completed
Then, similar to what you suggested, use \"%{X-Query-Result}o\" in your log format (note the o rather than i, since it is a response header, not a request header). I'd choose this over cookies just to avoid the extra "weight" associated with cookies.
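For illustration, a hedged sketch of setting that header from a JAX-RS-style resource (the helper name, response type and count source are hypothetical):

import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

public final class QueryResponses {

    // Build the response and expose the result count as a response header,
    // so Apache can log it with "%{X-Query-Result}o" in the LogFormat.
    public static Response withQueryResult(String resultXml, int resultCount) {
        return Response.ok(resultXml, MediaType.APPLICATION_XML)
                .header("X-Query-Result", "count=" + resultCount + " status=Completed")
                .build();
    }
}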